00:00:00.000 Started by upstream project "autotest-per-patch" build number 132309 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.142 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.143 The recommended git tool is: git 00:00:00.143 using credential 00000000-0000-0000-0000-000000000002 00:00:00.144 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.162 Fetching changes from the remote Git repository 00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.178 Using shallow fetch with depth 1 00:00:00.178 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.178 > git --version # timeout=10 00:00:00.198 > git --version # 'git version 2.39.2' 00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.216 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.216 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.275 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.287 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.298 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.298 > git config core.sparsecheckout # timeout=10 00:00:06.310 > git read-tree -mu HEAD # timeout=10 00:00:06.325 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.340 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.341 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.421 [Pipeline] Start of Pipeline 00:00:06.435 [Pipeline] library 00:00:06.436 Loading library shm_lib@master 00:00:06.437 Library shm_lib@master is cached. Copying from home. 00:00:06.454 [Pipeline] node 00:00:06.464 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.466 [Pipeline] { 00:00:06.475 [Pipeline] catchError 00:00:06.477 [Pipeline] { 00:00:06.489 [Pipeline] wrap 00:00:06.497 [Pipeline] { 00:00:06.505 [Pipeline] stage 00:00:06.507 [Pipeline] { (Prologue) 00:00:06.712 [Pipeline] sh 00:00:07.000 + logger -p user.info -t JENKINS-CI 00:00:07.018 [Pipeline] echo 00:00:07.020 Node: WFP8 00:00:07.027 [Pipeline] sh 00:00:07.332 [Pipeline] setCustomBuildProperty 00:00:07.343 [Pipeline] echo 00:00:07.345 Cleanup processes 00:00:07.350 [Pipeline] sh 00:00:07.635 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.635 2046505 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.646 [Pipeline] sh 00:00:07.930 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.930 ++ grep -v 'sudo pgrep' 00:00:07.930 ++ awk '{print $1}' 00:00:07.930 + sudo kill -9 00:00:07.930 + true 00:00:07.946 [Pipeline] cleanWs 00:00:07.957 [WS-CLEANUP] Deleting project workspace... 00:00:07.957 [WS-CLEANUP] Deferred wipeout is used... 
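The `pgrep -af … | grep -v 'sudo pgrep' | awk '{print $1}' | kill -9` sequence above is the job's stale-process cleanup. A minimal sketch of that pattern, with a placeholder workspace path; the `grep -v` keeps the pgrep invocation itself out of the kill list, and the trailing `|| true` mirrors the log's `+ true`, so an empty kill list does not fail the step:

```shell
# Placeholder path standing in for the Jenkins workspace.
workspace=/tmp/hypothetical-workspace

# List processes whose full command line mentions the workspace,
# excluding the pgrep pipeline itself, keeping only the PIDs.
pids=$(pgrep -af "$workspace" | grep -v 'pgrep' | awk '{print $1}')

# $pids is intentionally unquoted so multiple PIDs word-split into
# separate arguments; kill's usage error on an empty list is discarded.
kill -9 $pids 2>/dev/null || true
status=ok
echo "cleanup: $status"
```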
00:00:07.963 [WS-CLEANUP] done 00:00:07.966 [Pipeline] setCustomBuildProperty 00:00:07.979 [Pipeline] sh 00:00:08.264 + sudo git config --global --replace-all safe.directory '*' 00:00:08.361 [Pipeline] httpRequest 00:00:09.030 [Pipeline] echo 00:00:09.032 Sorcerer 10.211.164.20 is alive 00:00:09.042 [Pipeline] retry 00:00:09.044 [Pipeline] { 00:00:09.058 [Pipeline] httpRequest 00:00:09.062 HttpMethod: GET 00:00:09.062 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.063 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.066 Response Code: HTTP/1.1 200 OK 00:00:09.067 Success: Status code 200 is in the accepted range: 200,404 00:00:09.067 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.458 [Pipeline] } 00:00:10.474 [Pipeline] // retry 00:00:10.481 [Pipeline] sh 00:00:10.769 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.786 [Pipeline] httpRequest 00:00:11.117 [Pipeline] echo 00:00:11.119 Sorcerer 10.211.164.20 is alive 00:00:11.129 [Pipeline] retry 00:00:11.131 [Pipeline] { 00:00:11.145 [Pipeline] httpRequest 00:00:11.150 HttpMethod: GET 00:00:11.151 URL: http://10.211.164.20/packages/spdk_403bf887ac6ce76246bbf3c9eb1f45699885908f.tar.gz 00:00:11.151 Sending request to url: http://10.211.164.20/packages/spdk_403bf887ac6ce76246bbf3c9eb1f45699885908f.tar.gz 00:00:11.171 Response Code: HTTP/1.1 200 OK 00:00:11.171 Success: Status code 200 is in the accepted range: 200,404 00:00:11.171 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_403bf887ac6ce76246bbf3c9eb1f45699885908f.tar.gz 00:00:46.011 [Pipeline] } 00:00:46.030 [Pipeline] // retry 00:00:46.038 [Pipeline] sh 00:00:46.330 + tar --no-same-owner -xf spdk_403bf887ac6ce76246bbf3c9eb1f45699885908f.tar.gz 00:00:48.884 [Pipeline] sh 00:00:49.171 + git -C spdk log 
--oneline -n5 00:00:49.171 403bf887a nvmf: added support for add/delete host wrt referral 00:00:49.171 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:00:49.171 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:00:49.171 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:00:49.171 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid() 00:00:49.183 [Pipeline] } 00:00:49.197 [Pipeline] // stage 00:00:49.205 [Pipeline] stage 00:00:49.207 [Pipeline] { (Prepare) 00:00:49.223 [Pipeline] writeFile 00:00:49.238 [Pipeline] sh 00:00:49.524 + logger -p user.info -t JENKINS-CI 00:00:49.538 [Pipeline] sh 00:00:49.826 + logger -p user.info -t JENKINS-CI 00:00:49.838 [Pipeline] sh 00:00:50.122 + cat autorun-spdk.conf 00:00:50.122 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.122 SPDK_TEST_NVMF=1 00:00:50.122 SPDK_TEST_NVME_CLI=1 00:00:50.122 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.122 SPDK_TEST_NVMF_NICS=e810 00:00:50.122 SPDK_TEST_VFIOUSER=1 00:00:50.122 SPDK_RUN_UBSAN=1 00:00:50.122 NET_TYPE=phy 00:00:50.129 RUN_NIGHTLY=0 00:00:50.133 [Pipeline] readFile 00:00:50.159 [Pipeline] withEnv 00:00:50.162 [Pipeline] { 00:00:50.176 [Pipeline] sh 00:00:50.466 + set -ex 00:00:50.466 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:50.466 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:50.466 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.466 ++ SPDK_TEST_NVMF=1 00:00:50.466 ++ SPDK_TEST_NVME_CLI=1 00:00:50.466 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.466 ++ SPDK_TEST_NVMF_NICS=e810 00:00:50.466 ++ SPDK_TEST_VFIOUSER=1 00:00:50.466 ++ SPDK_RUN_UBSAN=1 00:00:50.466 ++ NET_TYPE=phy 00:00:50.466 ++ RUN_NIGHTLY=0 00:00:50.466 + case $SPDK_TEST_NVMF_NICS in 00:00:50.466 + DRIVERS=ice 00:00:50.466 + [[ tcp == \r\d\m\a ]] 00:00:50.466 + [[ -n ice ]] 00:00:50.466 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:50.466 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:53.762 rmmod: ERROR: Module irdma is not currently 
loaded 00:00:53.762 rmmod: ERROR: Module i40iw is not currently loaded 00:00:53.762 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:53.762 + true 00:00:53.762 + for D in $DRIVERS 00:00:53.762 + sudo modprobe ice 00:00:53.762 + exit 0 00:00:53.772 [Pipeline] } 00:00:53.786 [Pipeline] // withEnv 00:00:53.792 [Pipeline] } 00:00:53.805 [Pipeline] // stage 00:00:53.815 [Pipeline] catchError 00:00:53.817 [Pipeline] { 00:00:53.831 [Pipeline] timeout 00:00:53.831 Timeout set to expire in 1 hr 0 min 00:00:53.833 [Pipeline] { 00:00:53.847 [Pipeline] stage 00:00:53.850 [Pipeline] { (Tests) 00:00:53.864 [Pipeline] sh 00:00:54.152 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.152 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.152 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.152 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:54.152 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:54.152 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:54.152 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:54.152 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:54.152 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:54.152 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:54.152 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:54.152 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.152 + source /etc/os-release 00:00:54.152 ++ NAME='Fedora Linux' 00:00:54.152 ++ VERSION='39 (Cloud Edition)' 00:00:54.152 ++ ID=fedora 00:00:54.152 ++ VERSION_ID=39 00:00:54.152 ++ VERSION_CODENAME= 00:00:54.152 ++ PLATFORM_ID=platform:f39 00:00:54.152 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:54.152 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:54.152 ++ LOGO=fedora-logo-icon 00:00:54.152 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:54.152 ++ HOME_URL=https://fedoraproject.org/ 00:00:54.152 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:54.152 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:54.152 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:54.152 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:54.152 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:54.152 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:54.152 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:54.152 ++ SUPPORT_END=2024-11-12 00:00:54.152 ++ VARIANT='Cloud Edition' 00:00:54.152 ++ VARIANT_ID=cloud 00:00:54.152 + uname -a 00:00:54.152 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:54.152 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:56.693 Hugepages 00:00:56.693 node hugesize free / total 00:00:56.693 node0 1048576kB 0 / 0 00:00:56.693 node0 2048kB 1024 / 1024 00:00:56.693 node1 1048576kB 0 / 0 00:00:56.693 node1 2048kB 1024 / 1024 00:00:56.693 00:00:56.693 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:56.693 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:56.693 I/OAT 0000:00:04.1 8086 2021 0 
ioatdma - - 00:00:56.693 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:56.693 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:56.693 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:56.693 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:56.693 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:56.693 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:56.693 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:56.693 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:56.693 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:56.693 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:56.693 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:56.693 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:56.693 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:56.693 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:56.693 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:56.693 + rm -f /tmp/spdk-ld-path 00:00:56.693 + source autorun-spdk.conf 00:00:56.693 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.693 ++ SPDK_TEST_NVMF=1 00:00:56.693 ++ SPDK_TEST_NVME_CLI=1 00:00:56.693 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.693 ++ SPDK_TEST_NVMF_NICS=e810 00:00:56.693 ++ SPDK_TEST_VFIOUSER=1 00:00:56.693 ++ SPDK_RUN_UBSAN=1 00:00:56.693 ++ NET_TYPE=phy 00:00:56.693 ++ RUN_NIGHTLY=0 00:00:56.693 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:56.693 + [[ -n '' ]] 00:00:56.693 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:56.693 + for M in /var/spdk/build-*-manifest.txt 00:00:56.693 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:56.693 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:56.693 + for M in /var/spdk/build-*-manifest.txt 00:00:56.693 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:56.693 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:56.693 + for M in /var/spdk/build-*-manifest.txt 00:00:56.693 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:00:56.693 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:56.693 ++ uname 00:00:56.693 + [[ Linux == \L\i\n\u\x ]] 00:00:56.693 + sudo dmesg -T 00:00:56.953 + sudo dmesg --clear 00:00:56.953 + dmesg_pid=2047428 00:00:56.953 + [[ Fedora Linux == FreeBSD ]] 00:00:56.953 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:56.953 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:56.953 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:56.953 + [[ -x /usr/src/fio-static/fio ]] 00:00:56.953 + export FIO_BIN=/usr/src/fio-static/fio 00:00:56.953 + FIO_BIN=/usr/src/fio-static/fio 00:00:56.953 + sudo dmesg -Tw 00:00:56.953 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:56.953 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:56.953 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:56.953 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:56.953 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:56.953 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:56.953 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:56.953 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:56.953 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.953 12:43:54 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:00:56.953 12:43:54 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.953 12:43:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.953 12:43:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:56.953 12:43:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:56.953 12:43:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
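The repeated `+ source …/autorun-spdk.conf` / `++ SPDK_TEST_NVMF=1` lines work because the conf file is plain `KEY=VALUE` shell: the runner just existence-checks it and sources it. A small recreation of that pattern using a throwaway file and two keys taken from the log:

```shell
# Write a minimal stand-in for autorun-spdk.conf.
conf=$(mktemp)
printf 'SPDK_TEST_NVMF=1\nSPDK_TEST_NVMF_TRANSPORT=tcp\n' > "$conf"

# Guarded source, as in the log's `[[ -f … ]] && source …` step
# (`.` is the portable spelling of `source`).
[ -f "$conf" ] && . "$conf"

echo "transport=$SPDK_TEST_NVMF_TRANSPORT"
rm -f "$conf"
```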
00:00:56.953 12:43:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:56.953 12:43:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:00:56.954 12:43:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:56.954 12:43:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:56.954 12:43:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:00:56.954 12:43:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:56.954 12:43:54 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.954 12:43:54 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:00:56.954 12:43:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:56.954 12:43:54 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:56.954 12:43:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:56.954 12:43:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:56.954 12:43:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:56.954 12:43:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.954 12:43:54 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.954 12:43:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.954 12:43:54 -- paths/export.sh@5 -- $ export PATH 00:00:56.954 12:43:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.954 12:43:54 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:56.954 12:43:54 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:56.954 12:43:54 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731930234.XXXXXX 00:00:56.954 12:43:54 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731930234.dp4sSd 00:00:56.954 12:43:54 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:00:56.954 12:43:54 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:56.954 12:43:54 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:56.954 12:43:54 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:56.954 12:43:54 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:56.954 12:43:54 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:56.954 12:43:54 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:56.954 12:43:54 -- common/autotest_common.sh@10 -- $ set +x 00:00:56.954 12:43:54 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:56.954 12:43:54 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:56.954 12:43:54 -- pm/common@17 -- $ local monitor 00:00:56.954 12:43:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.954 12:43:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.954 12:43:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.954 12:43:54 -- pm/common@21 -- $ date +%s 00:00:56.954 12:43:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.954 12:43:54 -- pm/common@21 -- $ date +%s 00:00:56.954 12:43:54 -- pm/common@25 -- $ sleep 1 00:00:56.954 12:43:54 -- pm/common@21 -- $ date +%s 00:00:56.954 12:43:54 -- pm/common@21 -- $ date +%s 00:00:56.954 12:43:54 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731930234 00:00:56.954 12:43:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731930234 00:00:56.954 12:43:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731930234 00:00:56.954 12:43:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731930234 00:00:57.213 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731930234_collect-cpu-load.pm.log 00:00:57.213 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731930234_collect-vmstat.pm.log 00:00:57.213 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731930234_collect-cpu-temp.pm.log 00:00:57.213 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731930234_collect-bmc-pm.bmc.pm.log 00:00:58.150 12:43:55 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:58.150 12:43:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:58.150 12:43:55 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:58.150 12:43:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:58.150 12:43:55 -- spdk/autobuild.sh@16 -- $ date -u 00:00:58.150 Mon Nov 18 11:43:55 AM UTC 2024 00:00:58.150 12:43:55 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:00:58.150 v25.01-pre-159-g403bf887a 00:00:58.150 12:43:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:58.150 12:43:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:58.150 12:43:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:58.150 12:43:55 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:00:58.150 12:43:55 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:00:58.150 12:43:55 -- common/autotest_common.sh@10 -- $ set +x 00:00:58.151 ************************************ 00:00:58.151 START TEST ubsan 00:00:58.151 ************************************ 00:00:58.151 12:43:55 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:00:58.151 using ubsan 00:00:58.151 00:00:58.151 real 0m0.000s 00:00:58.151 user 0m0.000s 00:00:58.151 sys 0m0.000s 00:00:58.151 12:43:55 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:00:58.151 12:43:55 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:58.151 ************************************ 00:00:58.151 END TEST ubsan 00:00:58.151 ************************************ 00:00:58.151 12:43:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:58.151 12:43:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:58.151 12:43:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:58.151 12:43:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:58.151 12:43:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:58.151 12:43:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:58.151 12:43:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:58.151 12:43:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:58.151 12:43:55 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:58.409 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:58.409 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:58.668 Using 'verbs' RDMA provider 00:01:11.829 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:24.050 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:24.050 Creating mk/config.mk...done. 00:01:24.050 Creating mk/cc.flags.mk...done. 00:01:24.050 Type 'make' to build. 00:01:24.050 12:44:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:24.050 12:44:21 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:24.050 12:44:21 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:24.050 12:44:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.050 ************************************ 00:01:24.050 START TEST make 00:01:24.050 ************************************ 00:01:24.050 12:44:21 make -- common/autotest_common.sh@1127 -- $ make -j96 00:01:24.309 make[1]: Nothing to be done for 'all'. 
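The `run_test ubsan echo 'using ubsan'` step earlier, with its `START TEST` / `END TEST` banners and timing summary, comes from a wrapper function in SPDK's autotest_common.sh. The banner text below is copied from the log; the wrapper body itself is a guessed minimal imitation, not the real implementation:

```shell
# Hypothetical reimplementation of the run_test wrapper seen in the log:
# print a START banner, run the given command, print an END banner,
# propagate the command's exit status.
run_test() {
  name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test ubsan echo 'using ubsan'
```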
00:01:25.689 The Meson build system 00:01:25.689 Version: 1.5.0 00:01:25.689 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:25.689 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:25.689 Build type: native build 00:01:25.689 Project name: libvfio-user 00:01:25.689 Project version: 0.0.1 00:01:25.689 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:25.689 C linker for the host machine: cc ld.bfd 2.40-14 00:01:25.689 Host machine cpu family: x86_64 00:01:25.689 Host machine cpu: x86_64 00:01:25.689 Run-time dependency threads found: YES 00:01:25.689 Library dl found: YES 00:01:25.689 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:25.689 Run-time dependency json-c found: YES 0.17 00:01:25.689 Run-time dependency cmocka found: YES 1.1.7 00:01:25.689 Program pytest-3 found: NO 00:01:25.689 Program flake8 found: NO 00:01:25.689 Program misspell-fixer found: NO 00:01:25.689 Program restructuredtext-lint found: NO 00:01:25.689 Program valgrind found: YES (/usr/bin/valgrind) 00:01:25.689 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:25.690 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:25.690 Compiler for C supports arguments -Wwrite-strings: YES 00:01:25.690 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:25.690 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:25.690 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:25.690 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:25.690 Build targets in project: 8 00:01:25.690 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:25.690 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:25.690 00:01:25.690 libvfio-user 0.0.1 00:01:25.690 00:01:25.690 User defined options 00:01:25.690 buildtype : debug 00:01:25.690 default_library: shared 00:01:25.690 libdir : /usr/local/lib 00:01:25.690 00:01:25.690 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:26.257 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:26.257 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:26.258 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:26.258 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:26.258 [4/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:26.258 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:26.258 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:26.258 [7/37] Compiling C object samples/null.p/null.c.o 00:01:26.258 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:26.258 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:26.258 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:26.258 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:26.258 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:26.258 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:26.258 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:26.258 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:26.258 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:26.258 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:26.258 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:26.258 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:26.258 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:26.258 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:26.258 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:26.258 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:26.258 [24/37] Compiling C object samples/server.p/server.c.o 00:01:26.258 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:26.258 [26/37] Compiling C object samples/client.p/client.c.o 00:01:26.258 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:26.258 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:26.258 [29/37] Linking target samples/client 00:01:26.258 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:26.258 [31/37] Linking target test/unit_tests 00:01:26.517 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:26.517 [33/37] Linking target samples/null 00:01:26.517 [34/37] Linking target samples/gpio-pci-idio-16 00:01:26.517 [35/37] Linking target samples/server 00:01:26.517 [36/37] Linking target samples/lspci 00:01:26.517 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:26.517 INFO: autodetecting backend as ninja 00:01:26.517 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:26.517 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:27.086 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:27.086 ninja: no work to do. 
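The `DESTDIR=…/build/libvfio-user meson install` line above stages libvfio-user inside the SPDK build tree instead of the system `/usr/local`: DESTDIR is simply prepended to every install path. The same mechanism works for `make install`. A sketch with the real install simulated by `mkdir`/`touch` (the artifact name matches the library built above; the staging dir is a temp placeholder):

```shell
# Temp directory standing in for the DESTDIR staging root.
stage=$(mktemp -d)

# Matches "libdir : /usr/local/lib" in the meson summary above.
libdir=/usr/local/lib

# A real run would be: DESTDIR="$stage" meson install -C build-debug
# Simulated here by creating the expected artifact by hand.
mkdir -p "$stage$libdir"
touch "$stage$libdir/libvfio-user.so.0.0.1"

staged=$(find "$stage" -name 'libvfio-user.so.*' | wc -l)
echo "staged=$staged"
rm -rf "$stage"
```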
00:01:32.371 The Meson build system 00:01:32.371 Version: 1.5.0 00:01:32.371 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:32.371 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:32.371 Build type: native build 00:01:32.371 Program cat found: YES (/usr/bin/cat) 00:01:32.371 Project name: DPDK 00:01:32.371 Project version: 24.03.0 00:01:32.371 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:32.371 C linker for the host machine: cc ld.bfd 2.40-14 00:01:32.371 Host machine cpu family: x86_64 00:01:32.371 Host machine cpu: x86_64 00:01:32.371 Message: ## Building in Developer Mode ## 00:01:32.371 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:32.371 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:32.371 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:32.371 Program python3 found: YES (/usr/bin/python3) 00:01:32.371 Program cat found: YES (/usr/bin/cat) 00:01:32.371 Compiler for C supports arguments -march=native: YES 00:01:32.371 Checking for size of "void *" : 8 00:01:32.371 Checking for size of "void *" : 8 (cached) 00:01:32.371 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:32.371 Library m found: YES 00:01:32.371 Library numa found: YES 00:01:32.371 Has header "numaif.h" : YES 00:01:32.371 Library fdt found: NO 00:01:32.371 Library execinfo found: NO 00:01:32.371 Has header "execinfo.h" : YES 00:01:32.371 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:32.371 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:32.371 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:32.371 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:32.371 Run-time dependency openssl found: YES 3.1.1 00:01:32.371 Run-time 
dependency libpcap found: YES 1.10.4 00:01:32.371 Has header "pcap.h" with dependency libpcap: YES 00:01:32.371 Compiler for C supports arguments -Wcast-qual: YES 00:01:32.371 Compiler for C supports arguments -Wdeprecated: YES 00:01:32.371 Compiler for C supports arguments -Wformat: YES 00:01:32.371 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:32.371 Compiler for C supports arguments -Wformat-security: NO 00:01:32.371 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:32.371 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:32.371 Compiler for C supports arguments -Wnested-externs: YES 00:01:32.371 Compiler for C supports arguments -Wold-style-definition: YES 00:01:32.371 Compiler for C supports arguments -Wpointer-arith: YES 00:01:32.371 Compiler for C supports arguments -Wsign-compare: YES 00:01:32.371 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:32.371 Compiler for C supports arguments -Wundef: YES 00:01:32.371 Compiler for C supports arguments -Wwrite-strings: YES 00:01:32.371 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:32.371 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:32.371 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:32.371 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:32.371 Program objdump found: YES (/usr/bin/objdump) 00:01:32.371 Compiler for C supports arguments -mavx512f: YES 00:01:32.371 Checking if "AVX512 checking" compiles: YES 00:01:32.371 Fetching value of define "__SSE4_2__" : 1 00:01:32.371 Fetching value of define "__AES__" : 1 00:01:32.371 Fetching value of define "__AVX__" : 1 00:01:32.371 Fetching value of define "__AVX2__" : 1 00:01:32.371 Fetching value of define "__AVX512BW__" : 1 00:01:32.371 Fetching value of define "__AVX512CD__" : 1 00:01:32.371 Fetching value of define "__AVX512DQ__" : 1 00:01:32.372 Fetching value of define "__AVX512F__" : 1 
00:01:32.372 Fetching value of define "__AVX512VL__" : 1 00:01:32.372 Fetching value of define "__PCLMUL__" : 1 00:01:32.372 Fetching value of define "__RDRND__" : 1 00:01:32.372 Fetching value of define "__RDSEED__" : 1 00:01:32.372 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:32.372 Fetching value of define "__znver1__" : (undefined) 00:01:32.372 Fetching value of define "__znver2__" : (undefined) 00:01:32.372 Fetching value of define "__znver3__" : (undefined) 00:01:32.372 Fetching value of define "__znver4__" : (undefined) 00:01:32.372 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:32.372 Message: lib/log: Defining dependency "log" 00:01:32.372 Message: lib/kvargs: Defining dependency "kvargs" 00:01:32.372 Message: lib/telemetry: Defining dependency "telemetry" 00:01:32.372 Checking for function "getentropy" : NO 00:01:32.372 Message: lib/eal: Defining dependency "eal" 00:01:32.372 Message: lib/ring: Defining dependency "ring" 00:01:32.372 Message: lib/rcu: Defining dependency "rcu" 00:01:32.372 Message: lib/mempool: Defining dependency "mempool" 00:01:32.372 Message: lib/mbuf: Defining dependency "mbuf" 00:01:32.372 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:32.372 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:32.372 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:32.372 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:32.372 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:32.372 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:32.372 Compiler for C supports arguments -mpclmul: YES 00:01:32.372 Compiler for C supports arguments -maes: YES 00:01:32.372 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:32.372 Compiler for C supports arguments -mavx512bw: YES 00:01:32.372 Compiler for C supports arguments -mavx512dq: YES 00:01:32.372 Compiler for C supports arguments -mavx512vl: YES 00:01:32.372 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:32.372 Compiler for C supports arguments -mavx2: YES 00:01:32.372 Compiler for C supports arguments -mavx: YES 00:01:32.372 Message: lib/net: Defining dependency "net" 00:01:32.372 Message: lib/meter: Defining dependency "meter" 00:01:32.372 Message: lib/ethdev: Defining dependency "ethdev" 00:01:32.372 Message: lib/pci: Defining dependency "pci" 00:01:32.372 Message: lib/cmdline: Defining dependency "cmdline" 00:01:32.372 Message: lib/hash: Defining dependency "hash" 00:01:32.372 Message: lib/timer: Defining dependency "timer" 00:01:32.372 Message: lib/compressdev: Defining dependency "compressdev" 00:01:32.372 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:32.372 Message: lib/dmadev: Defining dependency "dmadev" 00:01:32.372 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:32.372 Message: lib/power: Defining dependency "power" 00:01:32.372 Message: lib/reorder: Defining dependency "reorder" 00:01:32.372 Message: lib/security: Defining dependency "security" 00:01:32.372 Has header "linux/userfaultfd.h" : YES 00:01:32.372 Has header "linux/vduse.h" : YES 00:01:32.372 Message: lib/vhost: Defining dependency "vhost" 00:01:32.372 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:32.372 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:32.372 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:32.372 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:32.372 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:32.372 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:32.372 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:32.372 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:32.372 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:32.372 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:32.372 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:32.372 Configuring doxy-api-html.conf using configuration 00:01:32.372 Configuring doxy-api-man.conf using configuration 00:01:32.372 Program mandb found: YES (/usr/bin/mandb) 00:01:32.372 Program sphinx-build found: NO 00:01:32.372 Configuring rte_build_config.h using configuration 00:01:32.372 Message: 00:01:32.372 ================= 00:01:32.372 Applications Enabled 00:01:32.372 ================= 00:01:32.372 00:01:32.372 apps: 00:01:32.372 00:01:32.372 00:01:32.372 Message: 00:01:32.372 ================= 00:01:32.372 Libraries Enabled 00:01:32.372 ================= 00:01:32.372 00:01:32.372 libs: 00:01:32.372 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:32.372 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:32.372 cryptodev, dmadev, power, reorder, security, vhost, 00:01:32.372 00:01:32.372 Message: 00:01:32.372 =============== 00:01:32.372 Drivers Enabled 00:01:32.372 =============== 00:01:32.372 00:01:32.372 common: 00:01:32.372 00:01:32.372 bus: 00:01:32.372 pci, vdev, 00:01:32.372 mempool: 00:01:32.372 ring, 00:01:32.372 dma: 00:01:32.372 00:01:32.372 net: 00:01:32.372 00:01:32.372 crypto: 00:01:32.372 00:01:32.372 compress: 00:01:32.372 00:01:32.372 vdpa: 00:01:32.372 00:01:32.372 00:01:32.372 Message: 00:01:32.372 ================= 00:01:32.372 Content Skipped 00:01:32.372 ================= 00:01:32.372 00:01:32.372 apps: 00:01:32.372 dumpcap: explicitly disabled via build config 00:01:32.372 graph: explicitly disabled via build config 00:01:32.372 pdump: explicitly disabled via build config 00:01:32.372 proc-info: explicitly disabled via build config 00:01:32.372 test-acl: explicitly disabled via build config 00:01:32.372 test-bbdev: explicitly disabled via build config 00:01:32.372 test-cmdline: explicitly disabled via build config 00:01:32.372 test-compress-perf: explicitly disabled via build config 00:01:32.372 test-crypto-perf: explicitly disabled 
via build config 00:01:32.372 test-dma-perf: explicitly disabled via build config 00:01:32.372 test-eventdev: explicitly disabled via build config 00:01:32.372 test-fib: explicitly disabled via build config 00:01:32.372 test-flow-perf: explicitly disabled via build config 00:01:32.372 test-gpudev: explicitly disabled via build config 00:01:32.372 test-mldev: explicitly disabled via build config 00:01:32.372 test-pipeline: explicitly disabled via build config 00:01:32.372 test-pmd: explicitly disabled via build config 00:01:32.372 test-regex: explicitly disabled via build config 00:01:32.372 test-sad: explicitly disabled via build config 00:01:32.372 test-security-perf: explicitly disabled via build config 00:01:32.372 00:01:32.372 libs: 00:01:32.372 argparse: explicitly disabled via build config 00:01:32.372 metrics: explicitly disabled via build config 00:01:32.372 acl: explicitly disabled via build config 00:01:32.372 bbdev: explicitly disabled via build config 00:01:32.372 bitratestats: explicitly disabled via build config 00:01:32.372 bpf: explicitly disabled via build config 00:01:32.372 cfgfile: explicitly disabled via build config 00:01:32.372 distributor: explicitly disabled via build config 00:01:32.372 efd: explicitly disabled via build config 00:01:32.372 eventdev: explicitly disabled via build config 00:01:32.372 dispatcher: explicitly disabled via build config 00:01:32.372 gpudev: explicitly disabled via build config 00:01:32.372 gro: explicitly disabled via build config 00:01:32.372 gso: explicitly disabled via build config 00:01:32.372 ip_frag: explicitly disabled via build config 00:01:32.372 jobstats: explicitly disabled via build config 00:01:32.372 latencystats: explicitly disabled via build config 00:01:32.372 lpm: explicitly disabled via build config 00:01:32.372 member: explicitly disabled via build config 00:01:32.372 pcapng: explicitly disabled via build config 00:01:32.372 rawdev: explicitly disabled via build config 00:01:32.372 regexdev: 
explicitly disabled via build config 00:01:32.372 mldev: explicitly disabled via build config 00:01:32.372 rib: explicitly disabled via build config 00:01:32.372 sched: explicitly disabled via build config 00:01:32.372 stack: explicitly disabled via build config 00:01:32.372 ipsec: explicitly disabled via build config 00:01:32.372 pdcp: explicitly disabled via build config 00:01:32.372 fib: explicitly disabled via build config 00:01:32.372 port: explicitly disabled via build config 00:01:32.372 pdump: explicitly disabled via build config 00:01:32.372 table: explicitly disabled via build config 00:01:32.372 pipeline: explicitly disabled via build config 00:01:32.372 graph: explicitly disabled via build config 00:01:32.372 node: explicitly disabled via build config 00:01:32.372 00:01:32.372 drivers: 00:01:32.372 common/cpt: not in enabled drivers build config 00:01:32.372 common/dpaax: not in enabled drivers build config 00:01:32.372 common/iavf: not in enabled drivers build config 00:01:32.372 common/idpf: not in enabled drivers build config 00:01:32.372 common/ionic: not in enabled drivers build config 00:01:32.372 common/mvep: not in enabled drivers build config 00:01:32.372 common/octeontx: not in enabled drivers build config 00:01:32.372 bus/auxiliary: not in enabled drivers build config 00:01:32.372 bus/cdx: not in enabled drivers build config 00:01:32.372 bus/dpaa: not in enabled drivers build config 00:01:32.372 bus/fslmc: not in enabled drivers build config 00:01:32.372 bus/ifpga: not in enabled drivers build config 00:01:32.372 bus/platform: not in enabled drivers build config 00:01:32.372 bus/uacce: not in enabled drivers build config 00:01:32.372 bus/vmbus: not in enabled drivers build config 00:01:32.372 common/cnxk: not in enabled drivers build config 00:01:32.372 common/mlx5: not in enabled drivers build config 00:01:32.372 common/nfp: not in enabled drivers build config 00:01:32.372 common/nitrox: not in enabled drivers build config 00:01:32.372 
common/qat: not in enabled drivers build config 00:01:32.372 common/sfc_efx: not in enabled drivers build config 00:01:32.372 mempool/bucket: not in enabled drivers build config 00:01:32.372 mempool/cnxk: not in enabled drivers build config 00:01:32.372 mempool/dpaa: not in enabled drivers build config 00:01:32.372 mempool/dpaa2: not in enabled drivers build config 00:01:32.372 mempool/octeontx: not in enabled drivers build config 00:01:32.373 mempool/stack: not in enabled drivers build config 00:01:32.373 dma/cnxk: not in enabled drivers build config 00:01:32.373 dma/dpaa: not in enabled drivers build config 00:01:32.373 dma/dpaa2: not in enabled drivers build config 00:01:32.373 dma/hisilicon: not in enabled drivers build config 00:01:32.373 dma/idxd: not in enabled drivers build config 00:01:32.373 dma/ioat: not in enabled drivers build config 00:01:32.373 dma/skeleton: not in enabled drivers build config 00:01:32.373 net/af_packet: not in enabled drivers build config 00:01:32.373 net/af_xdp: not in enabled drivers build config 00:01:32.373 net/ark: not in enabled drivers build config 00:01:32.373 net/atlantic: not in enabled drivers build config 00:01:32.373 net/avp: not in enabled drivers build config 00:01:32.373 net/axgbe: not in enabled drivers build config 00:01:32.373 net/bnx2x: not in enabled drivers build config 00:01:32.373 net/bnxt: not in enabled drivers build config 00:01:32.373 net/bonding: not in enabled drivers build config 00:01:32.373 net/cnxk: not in enabled drivers build config 00:01:32.373 net/cpfl: not in enabled drivers build config 00:01:32.373 net/cxgbe: not in enabled drivers build config 00:01:32.373 net/dpaa: not in enabled drivers build config 00:01:32.373 net/dpaa2: not in enabled drivers build config 00:01:32.373 net/e1000: not in enabled drivers build config 00:01:32.373 net/ena: not in enabled drivers build config 00:01:32.373 net/enetc: not in enabled drivers build config 00:01:32.373 net/enetfec: not in enabled drivers build 
config 00:01:32.373 net/enic: not in enabled drivers build config 00:01:32.373 net/failsafe: not in enabled drivers build config 00:01:32.373 net/fm10k: not in enabled drivers build config 00:01:32.373 net/gve: not in enabled drivers build config 00:01:32.373 net/hinic: not in enabled drivers build config 00:01:32.373 net/hns3: not in enabled drivers build config 00:01:32.373 net/i40e: not in enabled drivers build config 00:01:32.373 net/iavf: not in enabled drivers build config 00:01:32.373 net/ice: not in enabled drivers build config 00:01:32.373 net/idpf: not in enabled drivers build config 00:01:32.373 net/igc: not in enabled drivers build config 00:01:32.373 net/ionic: not in enabled drivers build config 00:01:32.373 net/ipn3ke: not in enabled drivers build config 00:01:32.373 net/ixgbe: not in enabled drivers build config 00:01:32.373 net/mana: not in enabled drivers build config 00:01:32.373 net/memif: not in enabled drivers build config 00:01:32.373 net/mlx4: not in enabled drivers build config 00:01:32.373 net/mlx5: not in enabled drivers build config 00:01:32.373 net/mvneta: not in enabled drivers build config 00:01:32.373 net/mvpp2: not in enabled drivers build config 00:01:32.373 net/netvsc: not in enabled drivers build config 00:01:32.373 net/nfb: not in enabled drivers build config 00:01:32.373 net/nfp: not in enabled drivers build config 00:01:32.373 net/ngbe: not in enabled drivers build config 00:01:32.373 net/null: not in enabled drivers build config 00:01:32.373 net/octeontx: not in enabled drivers build config 00:01:32.373 net/octeon_ep: not in enabled drivers build config 00:01:32.373 net/pcap: not in enabled drivers build config 00:01:32.373 net/pfe: not in enabled drivers build config 00:01:32.373 net/qede: not in enabled drivers build config 00:01:32.373 net/ring: not in enabled drivers build config 00:01:32.373 net/sfc: not in enabled drivers build config 00:01:32.373 net/softnic: not in enabled drivers build config 00:01:32.373 net/tap: 
not in enabled drivers build config 00:01:32.373 net/thunderx: not in enabled drivers build config 00:01:32.373 net/txgbe: not in enabled drivers build config 00:01:32.373 net/vdev_netvsc: not in enabled drivers build config 00:01:32.373 net/vhost: not in enabled drivers build config 00:01:32.373 net/virtio: not in enabled drivers build config 00:01:32.373 net/vmxnet3: not in enabled drivers build config 00:01:32.373 raw/*: missing internal dependency, "rawdev" 00:01:32.373 crypto/armv8: not in enabled drivers build config 00:01:32.373 crypto/bcmfs: not in enabled drivers build config 00:01:32.373 crypto/caam_jr: not in enabled drivers build config 00:01:32.373 crypto/ccp: not in enabled drivers build config 00:01:32.373 crypto/cnxk: not in enabled drivers build config 00:01:32.373 crypto/dpaa_sec: not in enabled drivers build config 00:01:32.373 crypto/dpaa2_sec: not in enabled drivers build config 00:01:32.373 crypto/ipsec_mb: not in enabled drivers build config 00:01:32.373 crypto/mlx5: not in enabled drivers build config 00:01:32.373 crypto/mvsam: not in enabled drivers build config 00:01:32.373 crypto/nitrox: not in enabled drivers build config 00:01:32.373 crypto/null: not in enabled drivers build config 00:01:32.373 crypto/octeontx: not in enabled drivers build config 00:01:32.373 crypto/openssl: not in enabled drivers build config 00:01:32.373 crypto/scheduler: not in enabled drivers build config 00:01:32.373 crypto/uadk: not in enabled drivers build config 00:01:32.373 crypto/virtio: not in enabled drivers build config 00:01:32.373 compress/isal: not in enabled drivers build config 00:01:32.373 compress/mlx5: not in enabled drivers build config 00:01:32.373 compress/nitrox: not in enabled drivers build config 00:01:32.373 compress/octeontx: not in enabled drivers build config 00:01:32.373 compress/zlib: not in enabled drivers build config 00:01:32.373 regex/*: missing internal dependency, "regexdev" 00:01:32.373 ml/*: missing internal dependency, "mldev" 
00:01:32.373 vdpa/ifc: not in enabled drivers build config 00:01:32.373 vdpa/mlx5: not in enabled drivers build config 00:01:32.373 vdpa/nfp: not in enabled drivers build config 00:01:32.373 vdpa/sfc: not in enabled drivers build config 00:01:32.373 event/*: missing internal dependency, "eventdev" 00:01:32.373 baseband/*: missing internal dependency, "bbdev" 00:01:32.373 gpu/*: missing internal dependency, "gpudev" 00:01:32.373 00:01:32.373 00:01:32.373 Build targets in project: 85 00:01:32.373 00:01:32.373 DPDK 24.03.0 00:01:32.373 00:01:32.373 User defined options 00:01:32.373 buildtype : debug 00:01:32.373 default_library : shared 00:01:32.373 libdir : lib 00:01:32.373 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:32.373 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:32.373 c_link_args : 00:01:32.373 cpu_instruction_set: native 00:01:32.373 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:32.373 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:32.373 enable_docs : false 00:01:32.373 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:32.373 enable_kmods : false 00:01:32.373 max_lcores : 128 00:01:32.373 tests : false 00:01:32.373 00:01:32.373 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:32.953 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:32.953 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:32.953 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:32.953 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:32.953 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:32.953 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:32.953 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:32.953 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:32.953 [8/268] Linking static target lib/librte_kvargs.a 00:01:32.953 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:32.953 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:32.953 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:32.953 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:32.953 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:32.953 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:32.953 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:33.215 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:33.215 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:33.215 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:33.215 [19/268] Linking static target lib/librte_log.a 00:01:33.215 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:33.215 [21/268] Linking static target lib/librte_pci.a 00:01:33.215 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:33.215 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:33.215 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:33.478 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:33.478 [26/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:33.478 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:33.478 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:33.478 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:33.478 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:33.478 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:33.478 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:33.478 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:33.478 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:33.478 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:33.478 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:33.478 [37/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:33.478 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:33.478 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:33.478 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:33.478 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:33.478 [42/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:33.478 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:33.478 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:33.478 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:33.478 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:33.478 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:33.478 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:33.478 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:33.478 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:33.478 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:33.478 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:33.478 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:33.478 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:33.478 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:33.478 [56/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:33.478 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:33.478 [58/268] Linking static target lib/librte_meter.a 00:01:33.478 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:33.478 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:33.478 [61/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:33.478 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:33.478 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:33.478 [64/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:33.478 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:33.478 [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:33.478 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:33.478 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:33.478 [69/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:33.478 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:33.478 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:33.478 [72/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:33.478 [73/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:33.478 [74/268] Linking static target lib/librte_ring.a 00:01:33.478 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:33.478 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:33.478 [77/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:33.478 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:33.478 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:33.478 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:33.478 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:33.478 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:33.478 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:33.478 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:33.478 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:33.478 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:33.478 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:33.478 [88/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:33.478 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:33.740 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:33.740 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:33.740 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:33.740 [93/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
00:01:33.740 [94/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.740 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:33.740 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:33.740 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:33.740 [98/268] Linking static target lib/librte_telemetry.a 00:01:33.740 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:33.740 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:33.740 [101/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:33.740 [102/268] Linking static target lib/librte_rcu.a 00:01:33.740 [103/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.740 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:33.740 [105/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:33.740 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:33.740 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:33.740 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:33.740 [109/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:33.740 [110/268] Linking static target lib/librte_net.a 00:01:33.740 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:33.740 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:33.740 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:33.740 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:33.740 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:33.740 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:33.740 
[117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:33.740 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:33.740 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:33.740 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:33.740 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:33.740 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:33.740 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:33.740 [124/268] Linking static target lib/librte_mempool.a 00:01:33.740 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:33.740 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:33.740 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:33.740 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:33.740 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:33.740 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:33.740 [131/268] Linking static target lib/librte_eal.a 00:01:33.740 [132/268] Linking static target lib/librte_cmdline.a 00:01:33.740 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:33.740 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:33.740 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.740 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:34.001 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.001 [138/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:34.001 [139/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:34.001 [140/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.001 [141/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:34.001 [142/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:34.001 [143/268] Linking static target lib/librte_timer.a 00:01:34.001 [144/268] Linking target lib/librte_log.so.24.1 00:01:34.001 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:34.001 [146/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.001 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:34.001 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.001 [149/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:34.001 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:34.002 [151/268] Linking static target lib/librte_mbuf.a 00:01:34.002 [152/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:34.002 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:34.002 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:34.002 [155/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:34.002 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:34.002 [157/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:34.002 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:34.002 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:34.002 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:34.002 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:34.002 [162/268] 
Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:34.002 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:34.002 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:34.002 [165/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:34.002 [166/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:34.002 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:34.002 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:34.002 [169/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:34.002 [170/268] Linking static target lib/librte_compressdev.a 00:01:34.002 [171/268] Linking static target lib/librte_security.a 00:01:34.002 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:34.002 [173/268] Linking static target lib/librte_reorder.a 00:01:34.002 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:34.002 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:34.002 [176/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:34.002 [177/268] Linking target lib/librte_kvargs.so.24.1 00:01:34.002 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:34.002 [179/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:34.263 [180/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.263 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:34.263 [182/268] Linking static target lib/librte_power.a 00:01:34.263 [183/268] Linking static target lib/librte_dmadev.a 00:01:34.263 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:34.263 
[185/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:34.263 [186/268] Linking target lib/librte_telemetry.so.24.1 00:01:34.263 [187/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:34.263 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:34.263 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:34.263 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:34.263 [191/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:34.263 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:34.263 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:34.263 [194/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.263 [195/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.263 [196/268] Linking static target drivers/librte_bus_vdev.a 00:01:34.263 [197/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:34.263 [198/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:34.263 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:34.263 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:34.263 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:34.263 [202/268] Linking static target lib/librte_hash.a 00:01:34.524 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:34.524 [204/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.524 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:34.524 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.524 [207/268] 
Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.524 [208/268] Linking static target drivers/librte_mempool_ring.a 00:01:34.524 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.524 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.524 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:34.524 [212/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:34.524 [213/268] Linking static target lib/librte_cryptodev.a 00:01:34.524 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.524 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.524 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.784 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.784 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.784 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:34.784 [220/268] Linking static target lib/librte_ethdev.a 00:01:34.784 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.784 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.784 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.044 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:35.044 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.304 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:35.304 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.245 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:36.245 [229/268] Linking static target lib/librte_vhost.a 00:01:36.506 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.936 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.220 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.793 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.793 [234/268] Linking target lib/librte_eal.so.24.1 00:01:44.053 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:44.053 [236/268] Linking target lib/librte_pci.so.24.1 00:01:44.053 [237/268] Linking target lib/librte_meter.so.24.1 00:01:44.053 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:44.053 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:44.053 [240/268] Linking target lib/librte_ring.so.24.1 00:01:44.053 [241/268] Linking target lib/librte_timer.so.24.1 00:01:44.312 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:44.312 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:44.312 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:44.312 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:44.312 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:44.312 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:44.312 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:44.312 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:44.312 [250/268] Generating symbol 
file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:44.312 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:44.312 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:44.312 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:44.581 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:44.581 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:44.581 [256/268] Linking target lib/librte_net.so.24.1 00:01:44.581 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:44.581 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:44.843 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:44.843 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:44.843 [261/268] Linking target lib/librte_hash.so.24.1 00:01:44.843 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:44.843 [263/268] Linking target lib/librte_security.so.24.1 00:01:44.843 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:44.843 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:44.843 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:45.104 [267/268] Linking target lib/librte_power.so.24.1 00:01:45.104 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:45.104 INFO: autodetecting backend as ninja 00:01:45.104 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:55.100 CC lib/log/log.o 00:01:55.100 CC lib/log/log_flags.o 00:01:55.100 CC lib/log/log_deprecated.o 00:01:55.100 CC lib/ut_mock/mock.o 00:01:55.100 CC lib/ut/ut.o 00:01:55.100 LIB libspdk_ut_mock.a 00:01:55.100 LIB libspdk_log.a 00:01:55.100 LIB libspdk_ut.a 00:01:55.100 SO libspdk_ut_mock.so.6.0 00:01:55.100 SO libspdk_ut.so.2.0 
00:01:55.100 SO libspdk_log.so.7.1 00:01:55.361 SYMLINK libspdk_ut_mock.so 00:01:55.361 SYMLINK libspdk_ut.so 00:01:55.361 SYMLINK libspdk_log.so 00:01:55.621 CXX lib/trace_parser/trace.o 00:01:55.621 CC lib/util/base64.o 00:01:55.621 CC lib/util/bit_array.o 00:01:55.621 CC lib/util/cpuset.o 00:01:55.621 CC lib/dma/dma.o 00:01:55.621 CC lib/util/crc16.o 00:01:55.621 CC lib/util/crc32.o 00:01:55.621 CC lib/util/crc32c.o 00:01:55.621 CC lib/util/crc32_ieee.o 00:01:55.621 CC lib/util/crc64.o 00:01:55.621 CC lib/util/dif.o 00:01:55.621 CC lib/util/fd.o 00:01:55.621 CC lib/ioat/ioat.o 00:01:55.621 CC lib/util/fd_group.o 00:01:55.621 CC lib/util/file.o 00:01:55.621 CC lib/util/hexlify.o 00:01:55.621 CC lib/util/iov.o 00:01:55.621 CC lib/util/math.o 00:01:55.621 CC lib/util/net.o 00:01:55.621 CC lib/util/pipe.o 00:01:55.621 CC lib/util/strerror_tls.o 00:01:55.621 CC lib/util/string.o 00:01:55.621 CC lib/util/uuid.o 00:01:55.621 CC lib/util/xor.o 00:01:55.621 CC lib/util/zipf.o 00:01:55.621 CC lib/util/md5.o 00:01:55.882 CC lib/vfio_user/host/vfio_user.o 00:01:55.882 CC lib/vfio_user/host/vfio_user_pci.o 00:01:55.882 LIB libspdk_dma.a 00:01:55.882 SO libspdk_dma.so.5.0 00:01:55.882 LIB libspdk_ioat.a 00:01:55.882 SYMLINK libspdk_dma.so 00:01:55.882 SO libspdk_ioat.so.7.0 00:01:55.882 SYMLINK libspdk_ioat.so 00:01:55.882 LIB libspdk_vfio_user.a 00:01:55.882 SO libspdk_vfio_user.so.5.0 00:01:56.142 LIB libspdk_util.a 00:01:56.142 SYMLINK libspdk_vfio_user.so 00:01:56.142 SO libspdk_util.so.10.1 00:01:56.142 SYMLINK libspdk_util.so 00:01:56.402 LIB libspdk_trace_parser.a 00:01:56.402 SO libspdk_trace_parser.so.6.0 00:01:56.402 SYMLINK libspdk_trace_parser.so 00:01:56.402 CC lib/idxd/idxd.o 00:01:56.402 CC lib/idxd/idxd_user.o 00:01:56.402 CC lib/idxd/idxd_kernel.o 00:01:56.402 CC lib/rdma_provider/common.o 00:01:56.402 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:56.662 CC lib/rdma_utils/rdma_utils.o 00:01:56.662 CC lib/json/json_parse.o 00:01:56.662 CC 
lib/json/json_util.o 00:01:56.662 CC lib/json/json_write.o 00:01:56.662 CC lib/conf/conf.o 00:01:56.662 CC lib/env_dpdk/env.o 00:01:56.662 CC lib/env_dpdk/memory.o 00:01:56.662 CC lib/vmd/vmd.o 00:01:56.662 CC lib/env_dpdk/pci.o 00:01:56.662 CC lib/vmd/led.o 00:01:56.662 CC lib/env_dpdk/init.o 00:01:56.662 CC lib/env_dpdk/threads.o 00:01:56.662 CC lib/env_dpdk/pci_ioat.o 00:01:56.662 CC lib/env_dpdk/pci_virtio.o 00:01:56.662 CC lib/env_dpdk/pci_vmd.o 00:01:56.662 CC lib/env_dpdk/pci_idxd.o 00:01:56.662 CC lib/env_dpdk/pci_event.o 00:01:56.662 CC lib/env_dpdk/sigbus_handler.o 00:01:56.662 CC lib/env_dpdk/pci_dpdk.o 00:01:56.662 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:56.662 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:56.662 LIB libspdk_rdma_provider.a 00:01:56.662 SO libspdk_rdma_provider.so.6.0 00:01:56.662 LIB libspdk_conf.a 00:01:56.921 LIB libspdk_json.a 00:01:56.921 SO libspdk_conf.so.6.0 00:01:56.921 LIB libspdk_rdma_utils.a 00:01:56.921 SYMLINK libspdk_rdma_provider.so 00:01:56.921 SO libspdk_json.so.6.0 00:01:56.921 SO libspdk_rdma_utils.so.1.0 00:01:56.921 SYMLINK libspdk_conf.so 00:01:56.921 SYMLINK libspdk_json.so 00:01:56.921 SYMLINK libspdk_rdma_utils.so 00:01:56.921 LIB libspdk_idxd.a 00:01:56.921 SO libspdk_idxd.so.12.1 00:01:57.181 LIB libspdk_vmd.a 00:01:57.181 SYMLINK libspdk_idxd.so 00:01:57.181 SO libspdk_vmd.so.6.0 00:01:57.181 SYMLINK libspdk_vmd.so 00:01:57.181 CC lib/jsonrpc/jsonrpc_server.o 00:01:57.181 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:57.181 CC lib/jsonrpc/jsonrpc_client.o 00:01:57.181 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:57.441 LIB libspdk_jsonrpc.a 00:01:57.441 SO libspdk_jsonrpc.so.6.0 00:01:57.441 SYMLINK libspdk_jsonrpc.so 00:01:57.702 LIB libspdk_env_dpdk.a 00:01:57.702 SO libspdk_env_dpdk.so.15.1 00:01:57.702 SYMLINK libspdk_env_dpdk.so 00:01:57.702 CC lib/rpc/rpc.o 00:01:57.962 LIB libspdk_rpc.a 00:01:57.962 SO libspdk_rpc.so.6.0 00:01:58.222 SYMLINK libspdk_rpc.so 00:01:58.482 CC lib/notify/notify.o 00:01:58.482 CC 
lib/notify/notify_rpc.o 00:01:58.482 CC lib/trace/trace.o 00:01:58.482 CC lib/trace/trace_flags.o 00:01:58.482 CC lib/trace/trace_rpc.o 00:01:58.482 CC lib/keyring/keyring.o 00:01:58.482 CC lib/keyring/keyring_rpc.o 00:01:58.482 LIB libspdk_notify.a 00:01:58.482 SO libspdk_notify.so.6.0 00:01:58.743 LIB libspdk_keyring.a 00:01:58.743 LIB libspdk_trace.a 00:01:58.743 SYMLINK libspdk_notify.so 00:01:58.743 SO libspdk_keyring.so.2.0 00:01:58.743 SO libspdk_trace.so.11.0 00:01:58.743 SYMLINK libspdk_keyring.so 00:01:58.743 SYMLINK libspdk_trace.so 00:01:59.004 CC lib/thread/thread.o 00:01:59.004 CC lib/thread/iobuf.o 00:01:59.004 CC lib/sock/sock.o 00:01:59.004 CC lib/sock/sock_rpc.o 00:01:59.264 LIB libspdk_sock.a 00:01:59.524 SO libspdk_sock.so.10.0 00:01:59.524 SYMLINK libspdk_sock.so 00:01:59.783 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:59.783 CC lib/nvme/nvme_ctrlr.o 00:01:59.783 CC lib/nvme/nvme_fabric.o 00:01:59.783 CC lib/nvme/nvme_ns_cmd.o 00:01:59.783 CC lib/nvme/nvme_ns.o 00:01:59.783 CC lib/nvme/nvme_pcie_common.o 00:01:59.783 CC lib/nvme/nvme_pcie.o 00:01:59.783 CC lib/nvme/nvme_qpair.o 00:01:59.783 CC lib/nvme/nvme.o 00:01:59.783 CC lib/nvme/nvme_quirks.o 00:01:59.783 CC lib/nvme/nvme_transport.o 00:01:59.783 CC lib/nvme/nvme_discovery.o 00:01:59.783 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:59.783 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:59.783 CC lib/nvme/nvme_tcp.o 00:01:59.783 CC lib/nvme/nvme_opal.o 00:01:59.783 CC lib/nvme/nvme_io_msg.o 00:01:59.783 CC lib/nvme/nvme_poll_group.o 00:01:59.783 CC lib/nvme/nvme_zns.o 00:01:59.783 CC lib/nvme/nvme_stubs.o 00:01:59.783 CC lib/nvme/nvme_auth.o 00:01:59.783 CC lib/nvme/nvme_cuse.o 00:01:59.783 CC lib/nvme/nvme_vfio_user.o 00:01:59.783 CC lib/nvme/nvme_rdma.o 00:02:00.042 LIB libspdk_thread.a 00:02:00.301 SO libspdk_thread.so.11.0 00:02:00.301 SYMLINK libspdk_thread.so 00:02:00.560 CC lib/fsdev/fsdev.o 00:02:00.560 CC lib/fsdev/fsdev_rpc.o 00:02:00.560 CC lib/fsdev/fsdev_io.o 00:02:00.560 CC lib/init/json_config.o 
00:02:00.560 CC lib/blob/blobstore.o 00:02:00.560 CC lib/blob/request.o 00:02:00.560 CC lib/init/subsystem.o 00:02:00.560 CC lib/init/subsystem_rpc.o 00:02:00.560 CC lib/blob/zeroes.o 00:02:00.560 CC lib/blob/blob_bs_dev.o 00:02:00.560 CC lib/init/rpc.o 00:02:00.560 CC lib/vfu_tgt/tgt_rpc.o 00:02:00.560 CC lib/vfu_tgt/tgt_endpoint.o 00:02:00.560 CC lib/accel/accel.o 00:02:00.560 CC lib/accel/accel_rpc.o 00:02:00.560 CC lib/accel/accel_sw.o 00:02:00.560 CC lib/virtio/virtio.o 00:02:00.560 CC lib/virtio/virtio_vfio_user.o 00:02:00.560 CC lib/virtio/virtio_vhost_user.o 00:02:00.560 CC lib/virtio/virtio_pci.o 00:02:00.819 LIB libspdk_init.a 00:02:00.819 SO libspdk_init.so.6.0 00:02:00.819 LIB libspdk_vfu_tgt.a 00:02:00.819 SO libspdk_vfu_tgt.so.3.0 00:02:00.819 LIB libspdk_virtio.a 00:02:00.819 SYMLINK libspdk_init.so 00:02:00.819 SO libspdk_virtio.so.7.0 00:02:00.819 SYMLINK libspdk_vfu_tgt.so 00:02:00.819 SYMLINK libspdk_virtio.so 00:02:01.079 LIB libspdk_fsdev.a 00:02:01.079 SO libspdk_fsdev.so.2.0 00:02:01.079 SYMLINK libspdk_fsdev.so 00:02:01.079 CC lib/event/app.o 00:02:01.079 CC lib/event/reactor.o 00:02:01.079 CC lib/event/log_rpc.o 00:02:01.079 CC lib/event/app_rpc.o 00:02:01.079 CC lib/event/scheduler_static.o 00:02:01.339 LIB libspdk_accel.a 00:02:01.339 SO libspdk_accel.so.16.0 00:02:01.339 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:01.339 SYMLINK libspdk_accel.so 00:02:01.339 LIB libspdk_nvme.a 00:02:01.599 LIB libspdk_event.a 00:02:01.600 SO libspdk_event.so.14.0 00:02:01.600 SO libspdk_nvme.so.15.0 00:02:01.600 SYMLINK libspdk_event.so 00:02:01.859 CC lib/bdev/bdev.o 00:02:01.859 CC lib/bdev/bdev_rpc.o 00:02:01.859 CC lib/bdev/bdev_zone.o 00:02:01.859 CC lib/bdev/part.o 00:02:01.859 CC lib/bdev/scsi_nvme.o 00:02:01.859 SYMLINK libspdk_nvme.so 00:02:01.859 LIB libspdk_fuse_dispatcher.a 00:02:01.859 SO libspdk_fuse_dispatcher.so.1.0 00:02:02.119 SYMLINK libspdk_fuse_dispatcher.so 00:02:02.688 LIB libspdk_blob.a 00:02:02.688 SO libspdk_blob.so.11.0 
00:02:02.946 SYMLINK libspdk_blob.so 00:02:03.205 CC lib/lvol/lvol.o 00:02:03.205 CC lib/blobfs/blobfs.o 00:02:03.205 CC lib/blobfs/tree.o 00:02:03.774 LIB libspdk_bdev.a 00:02:03.774 SO libspdk_bdev.so.17.0 00:02:03.774 LIB libspdk_blobfs.a 00:02:03.774 SYMLINK libspdk_bdev.so 00:02:03.774 SO libspdk_blobfs.so.10.0 00:02:03.774 LIB libspdk_lvol.a 00:02:03.774 SO libspdk_lvol.so.10.0 00:02:03.774 SYMLINK libspdk_blobfs.so 00:02:03.774 SYMLINK libspdk_lvol.so 00:02:04.033 CC lib/ublk/ublk.o 00:02:04.033 CC lib/ublk/ublk_rpc.o 00:02:04.033 CC lib/nbd/nbd.o 00:02:04.033 CC lib/nbd/nbd_rpc.o 00:02:04.033 CC lib/ftl/ftl_core.o 00:02:04.033 CC lib/ftl/ftl_init.o 00:02:04.033 CC lib/scsi/dev.o 00:02:04.033 CC lib/ftl/ftl_layout.o 00:02:04.033 CC lib/ftl/ftl_debug.o 00:02:04.033 CC lib/scsi/lun.o 00:02:04.033 CC lib/ftl/ftl_io.o 00:02:04.033 CC lib/scsi/port.o 00:02:04.033 CC lib/ftl/ftl_sb.o 00:02:04.033 CC lib/scsi/scsi.o 00:02:04.033 CC lib/ftl/ftl_l2p.o 00:02:04.033 CC lib/nvmf/ctrlr.o 00:02:04.033 CC lib/scsi/scsi_bdev.o 00:02:04.033 CC lib/ftl/ftl_l2p_flat.o 00:02:04.033 CC lib/nvmf/ctrlr_discovery.o 00:02:04.033 CC lib/scsi/scsi_pr.o 00:02:04.033 CC lib/ftl/ftl_nv_cache.o 00:02:04.033 CC lib/ftl/ftl_band.o 00:02:04.033 CC lib/scsi/scsi_rpc.o 00:02:04.033 CC lib/nvmf/ctrlr_bdev.o 00:02:04.033 CC lib/nvmf/subsystem.o 00:02:04.033 CC lib/ftl/ftl_band_ops.o 00:02:04.033 CC lib/ftl/ftl_writer.o 00:02:04.033 CC lib/scsi/task.o 00:02:04.033 CC lib/nvmf/nvmf_rpc.o 00:02:04.033 CC lib/nvmf/nvmf.o 00:02:04.033 CC lib/nvmf/transport.o 00:02:04.033 CC lib/ftl/ftl_rq.o 00:02:04.033 CC lib/ftl/ftl_reloc.o 00:02:04.033 CC lib/nvmf/tcp.o 00:02:04.033 CC lib/nvmf/stubs.o 00:02:04.033 CC lib/ftl/ftl_l2p_cache.o 00:02:04.033 CC lib/nvmf/mdns_server.o 00:02:04.033 CC lib/nvmf/vfio_user.o 00:02:04.033 CC lib/ftl/ftl_p2l.o 00:02:04.033 CC lib/nvmf/auth.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt.o 00:02:04.033 CC lib/nvmf/rdma.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:04.033 CC 
lib/ftl/ftl_p2l_log.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.033 CC lib/ftl/utils/ftl_conf.o 00:02:04.033 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.033 CC lib/ftl/utils/ftl_md.o 00:02:04.033 CC lib/ftl/utils/ftl_mempool.o 00:02:04.033 CC lib/ftl/utils/ftl_property.o 00:02:04.033 CC lib/ftl/utils/ftl_bitmap.o 00:02:04.033 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:04.033 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:04.033 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:04.033 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:04.033 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:04.033 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:04.033 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:04.033 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:04.033 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:04.033 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:04.033 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:04.033 CC lib/ftl/base/ftl_base_dev.o 00:02:04.033 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:04.033 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:04.033 CC lib/ftl/base/ftl_base_bdev.o 00:02:04.033 CC lib/ftl/ftl_trace.o 00:02:04.601 LIB libspdk_nbd.a 00:02:04.601 SO libspdk_nbd.so.7.0 00:02:04.601 SYMLINK libspdk_nbd.so 00:02:04.601 LIB libspdk_scsi.a 00:02:04.860 SO libspdk_scsi.so.9.0 00:02:04.860 SYMLINK libspdk_scsi.so 00:02:04.860 LIB libspdk_ublk.a 00:02:04.860 SO libspdk_ublk.so.3.0 00:02:04.860 SYMLINK libspdk_ublk.so 00:02:05.119 LIB libspdk_ftl.a 00:02:05.119 CC lib/iscsi/conn.o 00:02:05.119 CC lib/iscsi/init_grp.o 00:02:05.119 CC lib/iscsi/iscsi.o 00:02:05.119 CC lib/iscsi/param.o 
00:02:05.119 CC lib/iscsi/portal_grp.o 00:02:05.119 CC lib/iscsi/tgt_node.o 00:02:05.119 CC lib/iscsi/iscsi_rpc.o 00:02:05.119 CC lib/iscsi/iscsi_subsystem.o 00:02:05.119 CC lib/iscsi/task.o 00:02:05.119 CC lib/vhost/vhost.o 00:02:05.119 CC lib/vhost/vhost_rpc.o 00:02:05.119 CC lib/vhost/vhost_scsi.o 00:02:05.119 CC lib/vhost/vhost_blk.o 00:02:05.119 CC lib/vhost/rte_vhost_user.o 00:02:05.119 SO libspdk_ftl.so.9.0 00:02:05.378 SYMLINK libspdk_ftl.so 00:02:05.947 LIB libspdk_nvmf.a 00:02:05.947 SO libspdk_nvmf.so.20.0 00:02:05.947 LIB libspdk_vhost.a 00:02:05.947 SO libspdk_vhost.so.8.0 00:02:06.207 SYMLINK libspdk_nvmf.so 00:02:06.207 SYMLINK libspdk_vhost.so 00:02:06.207 LIB libspdk_iscsi.a 00:02:06.207 SO libspdk_iscsi.so.8.0 00:02:06.207 SYMLINK libspdk_iscsi.so 00:02:06.778 CC module/env_dpdk/env_dpdk_rpc.o 00:02:06.778 CC module/vfu_device/vfu_virtio.o 00:02:06.778 CC module/vfu_device/vfu_virtio_scsi.o 00:02:06.778 CC module/vfu_device/vfu_virtio_blk.o 00:02:06.778 CC module/vfu_device/vfu_virtio_rpc.o 00:02:06.778 CC module/vfu_device/vfu_virtio_fs.o 00:02:07.037 CC module/keyring/linux/keyring_rpc.o 00:02:07.037 CC module/keyring/linux/keyring.o 00:02:07.037 LIB libspdk_env_dpdk_rpc.a 00:02:07.037 CC module/accel/error/accel_error.o 00:02:07.037 CC module/accel/error/accel_error_rpc.o 00:02:07.037 CC module/accel/dsa/accel_dsa.o 00:02:07.037 CC module/accel/dsa/accel_dsa_rpc.o 00:02:07.037 CC module/keyring/file/keyring.o 00:02:07.037 CC module/keyring/file/keyring_rpc.o 00:02:07.037 CC module/accel/ioat/accel_ioat.o 00:02:07.037 CC module/accel/ioat/accel_ioat_rpc.o 00:02:07.037 CC module/accel/iaa/accel_iaa.o 00:02:07.037 CC module/accel/iaa/accel_iaa_rpc.o 00:02:07.037 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:07.037 CC module/scheduler/gscheduler/gscheduler.o 00:02:07.037 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:07.037 CC module/blob/bdev/blob_bdev.o 00:02:07.037 CC module/sock/posix/posix.o 00:02:07.037 SO 
libspdk_env_dpdk_rpc.so.6.0 00:02:07.037 CC module/fsdev/aio/fsdev_aio.o 00:02:07.037 CC module/fsdev/aio/linux_aio_mgr.o 00:02:07.037 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:07.037 SYMLINK libspdk_env_dpdk_rpc.so 00:02:07.037 LIB libspdk_keyring_linux.a 00:02:07.037 LIB libspdk_accel_error.a 00:02:07.037 SO libspdk_keyring_linux.so.1.0 00:02:07.037 SO libspdk_accel_error.so.2.0 00:02:07.037 LIB libspdk_scheduler_dpdk_governor.a 00:02:07.037 LIB libspdk_scheduler_gscheduler.a 00:02:07.297 LIB libspdk_accel_ioat.a 00:02:07.297 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:07.297 SO libspdk_scheduler_gscheduler.so.4.0 00:02:07.297 LIB libspdk_accel_iaa.a 00:02:07.297 LIB libspdk_keyring_file.a 00:02:07.297 SYMLINK libspdk_keyring_linux.so 00:02:07.297 SYMLINK libspdk_accel_error.so 00:02:07.297 LIB libspdk_scheduler_dynamic.a 00:02:07.297 SO libspdk_accel_ioat.so.6.0 00:02:07.297 SO libspdk_scheduler_dynamic.so.4.0 00:02:07.297 SO libspdk_accel_iaa.so.3.0 00:02:07.297 SO libspdk_keyring_file.so.2.0 00:02:07.297 LIB libspdk_accel_dsa.a 00:02:07.297 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:07.297 SYMLINK libspdk_scheduler_gscheduler.so 00:02:07.297 LIB libspdk_blob_bdev.a 00:02:07.297 SO libspdk_accel_dsa.so.5.0 00:02:07.297 SYMLINK libspdk_accel_ioat.so 00:02:07.297 SYMLINK libspdk_accel_iaa.so 00:02:07.297 SYMLINK libspdk_scheduler_dynamic.so 00:02:07.297 SO libspdk_blob_bdev.so.11.0 00:02:07.297 SYMLINK libspdk_keyring_file.so 00:02:07.297 LIB libspdk_vfu_device.a 00:02:07.297 SYMLINK libspdk_accel_dsa.so 00:02:07.297 SYMLINK libspdk_blob_bdev.so 00:02:07.297 SO libspdk_vfu_device.so.3.0 00:02:07.557 SYMLINK libspdk_vfu_device.so 00:02:07.557 LIB libspdk_fsdev_aio.a 00:02:07.557 LIB libspdk_sock_posix.a 00:02:07.557 SO libspdk_fsdev_aio.so.1.0 00:02:07.557 SO libspdk_sock_posix.so.6.0 00:02:07.557 SYMLINK libspdk_fsdev_aio.so 00:02:07.557 SYMLINK libspdk_sock_posix.so 00:02:07.817 CC module/bdev/nvme/bdev_nvme.o 00:02:07.817 CC 
module/bdev/malloc/bdev_malloc.o 00:02:07.817 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:07.817 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:07.817 CC module/bdev/nvme/nvme_rpc.o 00:02:07.817 CC module/bdev/nvme/vbdev_opal.o 00:02:07.817 CC module/bdev/nvme/bdev_mdns_client.o 00:02:07.817 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:07.817 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:07.817 CC module/bdev/delay/vbdev_delay.o 00:02:07.817 CC module/bdev/error/vbdev_error.o 00:02:07.817 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:07.817 CC module/bdev/gpt/gpt.o 00:02:07.817 CC module/bdev/error/vbdev_error_rpc.o 00:02:07.817 CC module/bdev/iscsi/bdev_iscsi.o 00:02:07.817 CC module/bdev/ftl/bdev_ftl.o 00:02:07.817 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:07.817 CC module/bdev/gpt/vbdev_gpt.o 00:02:07.817 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:07.817 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:07.817 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:07.817 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:07.817 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:07.817 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:07.817 CC module/blobfs/bdev/blobfs_bdev.o 00:02:07.817 CC module/bdev/null/bdev_null.o 00:02:07.817 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:07.817 CC module/bdev/lvol/vbdev_lvol.o 00:02:07.817 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:07.817 CC module/bdev/split/vbdev_split.o 00:02:07.817 CC module/bdev/null/bdev_null_rpc.o 00:02:07.817 CC module/bdev/raid/bdev_raid.o 00:02:07.817 CC module/bdev/split/vbdev_split_rpc.o 00:02:07.817 CC module/bdev/raid/bdev_raid_rpc.o 00:02:07.817 CC module/bdev/passthru/vbdev_passthru.o 00:02:07.817 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:07.817 CC module/bdev/raid/bdev_raid_sb.o 00:02:07.817 CC module/bdev/raid/raid0.o 00:02:07.817 CC module/bdev/raid/raid1.o 00:02:07.817 CC module/bdev/raid/concat.o 00:02:07.817 CC module/bdev/aio/bdev_aio.o 00:02:07.817 CC module/bdev/aio/bdev_aio_rpc.o 
00:02:08.076 LIB libspdk_blobfs_bdev.a 00:02:08.076 LIB libspdk_bdev_error.a 00:02:08.076 SO libspdk_blobfs_bdev.so.6.0 00:02:08.076 LIB libspdk_bdev_split.a 00:02:08.076 SO libspdk_bdev_error.so.6.0 00:02:08.076 SO libspdk_bdev_split.so.6.0 00:02:08.076 LIB libspdk_bdev_passthru.a 00:02:08.076 LIB libspdk_bdev_gpt.a 00:02:08.076 LIB libspdk_bdev_null.a 00:02:08.076 SYMLINK libspdk_blobfs_bdev.so 00:02:08.076 LIB libspdk_bdev_ftl.a 00:02:08.336 LIB libspdk_bdev_malloc.a 00:02:08.336 SO libspdk_bdev_passthru.so.6.0 00:02:08.336 LIB libspdk_bdev_zone_block.a 00:02:08.336 SO libspdk_bdev_gpt.so.6.0 00:02:08.336 SO libspdk_bdev_null.so.6.0 00:02:08.336 SYMLINK libspdk_bdev_error.so 00:02:08.336 SYMLINK libspdk_bdev_split.so 00:02:08.336 SO libspdk_bdev_malloc.so.6.0 00:02:08.336 SO libspdk_bdev_ftl.so.6.0 00:02:08.336 LIB libspdk_bdev_aio.a 00:02:08.336 LIB libspdk_bdev_iscsi.a 00:02:08.336 SO libspdk_bdev_zone_block.so.6.0 00:02:08.336 LIB libspdk_bdev_delay.a 00:02:08.336 SYMLINK libspdk_bdev_passthru.so 00:02:08.336 SYMLINK libspdk_bdev_gpt.so 00:02:08.336 SO libspdk_bdev_iscsi.so.6.0 00:02:08.336 SO libspdk_bdev_aio.so.6.0 00:02:08.336 SYMLINK libspdk_bdev_null.so 00:02:08.336 SYMLINK libspdk_bdev_malloc.so 00:02:08.336 SO libspdk_bdev_delay.so.6.0 00:02:08.336 SYMLINK libspdk_bdev_ftl.so 00:02:08.336 SYMLINK libspdk_bdev_zone_block.so 00:02:08.336 LIB libspdk_bdev_virtio.a 00:02:08.336 SYMLINK libspdk_bdev_aio.so 00:02:08.336 LIB libspdk_bdev_lvol.a 00:02:08.336 SYMLINK libspdk_bdev_iscsi.so 00:02:08.336 SO libspdk_bdev_virtio.so.6.0 00:02:08.336 SYMLINK libspdk_bdev_delay.so 00:02:08.336 SO libspdk_bdev_lvol.so.6.0 00:02:08.336 SYMLINK libspdk_bdev_virtio.so 00:02:08.336 SYMLINK libspdk_bdev_lvol.so 00:02:08.596 LIB libspdk_bdev_raid.a 00:02:08.856 SO libspdk_bdev_raid.so.6.0 00:02:08.856 SYMLINK libspdk_bdev_raid.so 00:02:09.797 LIB libspdk_bdev_nvme.a 00:02:09.797 SO libspdk_bdev_nvme.so.7.1 00:02:09.797 SYMLINK libspdk_bdev_nvme.so 00:02:10.367 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:02:10.367 CC module/event/subsystems/keyring/keyring.o 00:02:10.367 CC module/event/subsystems/iobuf/iobuf.o 00:02:10.367 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:10.367 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:10.367 CC module/event/subsystems/vmd/vmd.o 00:02:10.367 CC module/event/subsystems/sock/sock.o 00:02:10.367 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:10.367 CC module/event/subsystems/scheduler/scheduler.o 00:02:10.367 CC module/event/subsystems/fsdev/fsdev.o 00:02:10.642 LIB libspdk_event_keyring.a 00:02:10.642 LIB libspdk_event_vhost_blk.a 00:02:10.642 LIB libspdk_event_sock.a 00:02:10.642 LIB libspdk_event_vfu_tgt.a 00:02:10.642 LIB libspdk_event_fsdev.a 00:02:10.642 LIB libspdk_event_scheduler.a 00:02:10.642 LIB libspdk_event_iobuf.a 00:02:10.642 LIB libspdk_event_vmd.a 00:02:10.642 SO libspdk_event_keyring.so.1.0 00:02:10.642 SO libspdk_event_vhost_blk.so.3.0 00:02:10.642 SO libspdk_event_sock.so.5.0 00:02:10.642 SO libspdk_event_fsdev.so.1.0 00:02:10.642 SO libspdk_event_vfu_tgt.so.3.0 00:02:10.642 SO libspdk_event_scheduler.so.4.0 00:02:10.642 SO libspdk_event_iobuf.so.3.0 00:02:10.642 SO libspdk_event_vmd.so.6.0 00:02:10.642 SYMLINK libspdk_event_vhost_blk.so 00:02:10.642 SYMLINK libspdk_event_keyring.so 00:02:10.642 SYMLINK libspdk_event_sock.so 00:02:10.642 SYMLINK libspdk_event_vfu_tgt.so 00:02:10.642 SYMLINK libspdk_event_fsdev.so 00:02:10.642 SYMLINK libspdk_event_scheduler.so 00:02:10.642 SYMLINK libspdk_event_vmd.so 00:02:10.642 SYMLINK libspdk_event_iobuf.so 00:02:11.214 CC module/event/subsystems/accel/accel.o 00:02:11.214 LIB libspdk_event_accel.a 00:02:11.214 SO libspdk_event_accel.so.6.0 00:02:11.214 SYMLINK libspdk_event_accel.so 00:02:11.784 CC module/event/subsystems/bdev/bdev.o 00:02:11.784 LIB libspdk_event_bdev.a 00:02:11.784 SO libspdk_event_bdev.so.6.0 00:02:11.784 SYMLINK libspdk_event_bdev.so 00:02:12.045 CC module/event/subsystems/scsi/scsi.o 
00:02:12.045 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:12.045 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:12.045 CC module/event/subsystems/ublk/ublk.o 00:02:12.305 CC module/event/subsystems/nbd/nbd.o 00:02:12.305 LIB libspdk_event_nbd.a 00:02:12.305 LIB libspdk_event_ublk.a 00:02:12.305 LIB libspdk_event_scsi.a 00:02:12.305 SO libspdk_event_ublk.so.3.0 00:02:12.305 SO libspdk_event_nbd.so.6.0 00:02:12.305 SO libspdk_event_scsi.so.6.0 00:02:12.305 LIB libspdk_event_nvmf.a 00:02:12.305 SYMLINK libspdk_event_ublk.so 00:02:12.305 SO libspdk_event_nvmf.so.6.0 00:02:12.305 SYMLINK libspdk_event_nbd.so 00:02:12.305 SYMLINK libspdk_event_scsi.so 00:02:12.565 SYMLINK libspdk_event_nvmf.so 00:02:12.825 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.825 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.825 LIB libspdk_event_vhost_scsi.a 00:02:12.825 LIB libspdk_event_iscsi.a 00:02:12.825 SO libspdk_event_vhost_scsi.so.3.0 00:02:12.825 SO libspdk_event_iscsi.so.6.0 00:02:13.086 SYMLINK libspdk_event_vhost_scsi.so 00:02:13.086 SYMLINK libspdk_event_iscsi.so 00:02:13.086 SO libspdk.so.6.0 00:02:13.086 SYMLINK libspdk.so 00:02:13.662 TEST_HEADER include/spdk/accel.h 00:02:13.662 TEST_HEADER include/spdk/accel_module.h 00:02:13.662 CC app/trace_record/trace_record.o 00:02:13.662 TEST_HEADER include/spdk/assert.h 00:02:13.662 TEST_HEADER include/spdk/base64.h 00:02:13.662 CC app/spdk_top/spdk_top.o 00:02:13.662 TEST_HEADER include/spdk/bdev.h 00:02:13.662 TEST_HEADER include/spdk/barrier.h 00:02:13.662 TEST_HEADER include/spdk/bdev_module.h 00:02:13.662 TEST_HEADER include/spdk/bdev_zone.h 00:02:13.662 TEST_HEADER include/spdk/bit_array.h 00:02:13.662 TEST_HEADER include/spdk/bit_pool.h 00:02:13.662 CC app/spdk_nvme_identify/identify.o 00:02:13.662 TEST_HEADER include/spdk/blob_bdev.h 00:02:13.662 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:13.662 TEST_HEADER include/spdk/blobfs.h 00:02:13.662 TEST_HEADER include/spdk/blob.h 00:02:13.662 TEST_HEADER 
include/spdk/cpuset.h 00:02:13.662 TEST_HEADER include/spdk/conf.h 00:02:13.662 TEST_HEADER include/spdk/config.h 00:02:13.662 TEST_HEADER include/spdk/crc32.h 00:02:13.662 TEST_HEADER include/spdk/crc16.h 00:02:13.662 TEST_HEADER include/spdk/crc64.h 00:02:13.662 TEST_HEADER include/spdk/dif.h 00:02:13.662 CC app/spdk_nvme_discover/discovery_aer.o 00:02:13.662 TEST_HEADER include/spdk/dma.h 00:02:13.662 TEST_HEADER include/spdk/endian.h 00:02:13.662 TEST_HEADER include/spdk/env_dpdk.h 00:02:13.662 TEST_HEADER include/spdk/env.h 00:02:13.662 TEST_HEADER include/spdk/event.h 00:02:13.662 TEST_HEADER include/spdk/fd_group.h 00:02:13.662 TEST_HEADER include/spdk/fd.h 00:02:13.662 TEST_HEADER include/spdk/fsdev.h 00:02:13.662 TEST_HEADER include/spdk/file.h 00:02:13.662 TEST_HEADER include/spdk/fsdev_module.h 00:02:13.662 CC app/spdk_nvme_perf/perf.o 00:02:13.662 CXX app/trace/trace.o 00:02:13.662 TEST_HEADER include/spdk/ftl.h 00:02:13.662 CC test/rpc_client/rpc_client_test.o 00:02:13.662 CC app/spdk_lspci/spdk_lspci.o 00:02:13.662 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:13.662 TEST_HEADER include/spdk/gpt_spec.h 00:02:13.662 TEST_HEADER include/spdk/hexlify.h 00:02:13.662 TEST_HEADER include/spdk/histogram_data.h 00:02:13.662 TEST_HEADER include/spdk/idxd.h 00:02:13.662 TEST_HEADER include/spdk/idxd_spec.h 00:02:13.662 TEST_HEADER include/spdk/init.h 00:02:13.662 TEST_HEADER include/spdk/ioat.h 00:02:13.662 TEST_HEADER include/spdk/ioat_spec.h 00:02:13.662 TEST_HEADER include/spdk/iscsi_spec.h 00:02:13.662 TEST_HEADER include/spdk/json.h 00:02:13.662 TEST_HEADER include/spdk/jsonrpc.h 00:02:13.662 TEST_HEADER include/spdk/keyring_module.h 00:02:13.662 TEST_HEADER include/spdk/keyring.h 00:02:13.662 TEST_HEADER include/spdk/likely.h 00:02:13.662 TEST_HEADER include/spdk/lvol.h 00:02:13.662 TEST_HEADER include/spdk/log.h 00:02:13.662 TEST_HEADER include/spdk/memory.h 00:02:13.662 TEST_HEADER include/spdk/md5.h 00:02:13.662 TEST_HEADER include/spdk/mmio.h 
00:02:13.662 TEST_HEADER include/spdk/notify.h 00:02:13.662 TEST_HEADER include/spdk/nbd.h 00:02:13.662 TEST_HEADER include/spdk/net.h 00:02:13.662 TEST_HEADER include/spdk/nvme.h 00:02:13.662 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:13.662 TEST_HEADER include/spdk/nvme_intel.h 00:02:13.662 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:13.662 TEST_HEADER include/spdk/nvme_spec.h 00:02:13.662 TEST_HEADER include/spdk/nvmf.h 00:02:13.662 TEST_HEADER include/spdk/nvme_zns.h 00:02:13.662 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:13.662 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:13.662 TEST_HEADER include/spdk/nvmf_spec.h 00:02:13.662 TEST_HEADER include/spdk/nvmf_transport.h 00:02:13.662 TEST_HEADER include/spdk/opal.h 00:02:13.662 TEST_HEADER include/spdk/opal_spec.h 00:02:13.662 TEST_HEADER include/spdk/pci_ids.h 00:02:13.662 TEST_HEADER include/spdk/pipe.h 00:02:13.662 CC app/iscsi_tgt/iscsi_tgt.o 00:02:13.662 TEST_HEADER include/spdk/reduce.h 00:02:13.662 TEST_HEADER include/spdk/queue.h 00:02:13.662 CC app/spdk_dd/spdk_dd.o 00:02:13.662 TEST_HEADER include/spdk/scheduler.h 00:02:13.662 TEST_HEADER include/spdk/rpc.h 00:02:13.662 TEST_HEADER include/spdk/scsi.h 00:02:13.662 TEST_HEADER include/spdk/sock.h 00:02:13.662 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:13.662 TEST_HEADER include/spdk/scsi_spec.h 00:02:13.662 TEST_HEADER include/spdk/stdinc.h 00:02:13.662 TEST_HEADER include/spdk/trace.h 00:02:13.662 TEST_HEADER include/spdk/thread.h 00:02:13.662 TEST_HEADER include/spdk/trace_parser.h 00:02:13.662 TEST_HEADER include/spdk/string.h 00:02:13.662 TEST_HEADER include/spdk/tree.h 00:02:13.662 TEST_HEADER include/spdk/ublk.h 00:02:13.662 CC app/nvmf_tgt/nvmf_main.o 00:02:13.662 TEST_HEADER include/spdk/util.h 00:02:13.662 TEST_HEADER include/spdk/uuid.h 00:02:13.662 TEST_HEADER include/spdk/version.h 00:02:13.662 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:13.662 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:13.662 TEST_HEADER 
include/spdk/vmd.h 00:02:13.662 TEST_HEADER include/spdk/vhost.h 00:02:13.662 TEST_HEADER include/spdk/xor.h 00:02:13.662 TEST_HEADER include/spdk/zipf.h 00:02:13.662 CXX test/cpp_headers/accel_module.o 00:02:13.662 CXX test/cpp_headers/assert.o 00:02:13.662 CXX test/cpp_headers/accel.o 00:02:13.662 CXX test/cpp_headers/barrier.o 00:02:13.662 CXX test/cpp_headers/base64.o 00:02:13.662 CXX test/cpp_headers/bdev.o 00:02:13.662 CXX test/cpp_headers/bdev_zone.o 00:02:13.662 CXX test/cpp_headers/bdev_module.o 00:02:13.662 CXX test/cpp_headers/bit_array.o 00:02:13.662 CXX test/cpp_headers/bit_pool.o 00:02:13.662 CXX test/cpp_headers/blob_bdev.o 00:02:13.662 CC app/spdk_tgt/spdk_tgt.o 00:02:13.662 CXX test/cpp_headers/blobfs_bdev.o 00:02:13.662 CXX test/cpp_headers/blobfs.o 00:02:13.662 CXX test/cpp_headers/conf.o 00:02:13.662 CXX test/cpp_headers/blob.o 00:02:13.662 CXX test/cpp_headers/config.o 00:02:13.662 CXX test/cpp_headers/cpuset.o 00:02:13.662 CXX test/cpp_headers/crc16.o 00:02:13.662 CXX test/cpp_headers/crc64.o 00:02:13.662 CXX test/cpp_headers/crc32.o 00:02:13.662 CXX test/cpp_headers/dif.o 00:02:13.662 CXX test/cpp_headers/dma.o 00:02:13.662 CXX test/cpp_headers/env_dpdk.o 00:02:13.662 CXX test/cpp_headers/env.o 00:02:13.662 CXX test/cpp_headers/endian.o 00:02:13.662 CXX test/cpp_headers/fd_group.o 00:02:13.662 CXX test/cpp_headers/event.o 00:02:13.662 CXX test/cpp_headers/file.o 00:02:13.662 CXX test/cpp_headers/fd.o 00:02:13.662 CXX test/cpp_headers/fsdev.o 00:02:13.662 CXX test/cpp_headers/fuse_dispatcher.o 00:02:13.662 CXX test/cpp_headers/fsdev_module.o 00:02:13.662 CXX test/cpp_headers/ftl.o 00:02:13.662 CXX test/cpp_headers/gpt_spec.o 00:02:13.662 CXX test/cpp_headers/hexlify.o 00:02:13.662 CXX test/cpp_headers/histogram_data.o 00:02:13.662 CXX test/cpp_headers/idxd_spec.o 00:02:13.662 CXX test/cpp_headers/idxd.o 00:02:13.662 CXX test/cpp_headers/ioat.o 00:02:13.662 CXX test/cpp_headers/init.o 00:02:13.662 CXX test/cpp_headers/iscsi_spec.o 00:02:13.662 
CXX test/cpp_headers/ioat_spec.o 00:02:13.662 CXX test/cpp_headers/json.o 00:02:13.662 CXX test/cpp_headers/jsonrpc.o 00:02:13.662 CXX test/cpp_headers/likely.o 00:02:13.662 CXX test/cpp_headers/keyring.o 00:02:13.662 CXX test/cpp_headers/lvol.o 00:02:13.662 CXX test/cpp_headers/keyring_module.o 00:02:13.662 CXX test/cpp_headers/md5.o 00:02:13.662 CXX test/cpp_headers/memory.o 00:02:13.662 CXX test/cpp_headers/log.o 00:02:13.662 CXX test/cpp_headers/nbd.o 00:02:13.662 CXX test/cpp_headers/mmio.o 00:02:13.662 CXX test/cpp_headers/net.o 00:02:13.662 CXX test/cpp_headers/nvme.o 00:02:13.662 CXX test/cpp_headers/nvme_intel.o 00:02:13.662 CXX test/cpp_headers/notify.o 00:02:13.662 CXX test/cpp_headers/nvme_ocssd.o 00:02:13.662 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:13.662 CXX test/cpp_headers/nvme_zns.o 00:02:13.662 CXX test/cpp_headers/nvme_spec.o 00:02:13.662 CXX test/cpp_headers/nvmf_cmd.o 00:02:13.662 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:13.662 CXX test/cpp_headers/nvmf.o 00:02:13.662 CXX test/cpp_headers/nvmf_transport.o 00:02:13.662 CXX test/cpp_headers/nvmf_spec.o 00:02:13.662 CXX test/cpp_headers/opal.o 00:02:13.662 CC test/app/histogram_perf/histogram_perf.o 00:02:13.662 CC test/app/stub/stub.o 00:02:13.662 CC test/thread/poller_perf/poller_perf.o 00:02:13.662 CC test/app/jsoncat/jsoncat.o 00:02:13.662 CC examples/ioat/verify/verify.o 00:02:13.662 CXX test/cpp_headers/opal_spec.o 00:02:13.662 CC examples/util/zipf/zipf.o 00:02:13.939 CC examples/ioat/perf/perf.o 00:02:13.939 CC test/env/pci/pci_ut.o 00:02:13.939 CC test/app/bdev_svc/bdev_svc.o 00:02:13.939 CC test/env/vtophys/vtophys.o 00:02:13.939 CC app/fio/nvme/fio_plugin.o 00:02:13.939 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:13.939 CC test/env/memory/memory_ut.o 00:02:13.939 LINK spdk_lspci 00:02:13.939 CC test/dma/test_dma/test_dma.o 00:02:13.939 CC app/fio/bdev/fio_plugin.o 00:02:14.247 LINK spdk_nvme_discover 00:02:14.247 LINK nvmf_tgt 00:02:14.247 LINK interrupt_tgt 
00:02:14.247 LINK rpc_client_test 00:02:14.247 LINK histogram_perf 00:02:14.247 LINK poller_perf 00:02:14.247 CXX test/cpp_headers/pci_ids.o 00:02:14.247 CXX test/cpp_headers/pipe.o 00:02:14.247 LINK spdk_tgt 00:02:14.247 CC test/env/mem_callbacks/mem_callbacks.o 00:02:14.247 LINK zipf 00:02:14.247 CXX test/cpp_headers/queue.o 00:02:14.247 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:14.247 LINK spdk_trace_record 00:02:14.247 CXX test/cpp_headers/reduce.o 00:02:14.247 CXX test/cpp_headers/rpc.o 00:02:14.247 CXX test/cpp_headers/scheduler.o 00:02:14.247 CXX test/cpp_headers/scsi.o 00:02:14.247 CXX test/cpp_headers/scsi_spec.o 00:02:14.247 LINK iscsi_tgt 00:02:14.247 CXX test/cpp_headers/stdinc.o 00:02:14.247 CXX test/cpp_headers/thread.o 00:02:14.247 CXX test/cpp_headers/sock.o 00:02:14.247 CXX test/cpp_headers/trace.o 00:02:14.247 CXX test/cpp_headers/string.o 00:02:14.247 CXX test/cpp_headers/trace_parser.o 00:02:14.247 CXX test/cpp_headers/ublk.o 00:02:14.247 CXX test/cpp_headers/tree.o 00:02:14.247 CXX test/cpp_headers/uuid.o 00:02:14.247 CXX test/cpp_headers/util.o 00:02:14.247 CXX test/cpp_headers/version.o 00:02:14.247 CXX test/cpp_headers/vfio_user_pci.o 00:02:14.247 CXX test/cpp_headers/vfio_user_spec.o 00:02:14.247 CXX test/cpp_headers/vhost.o 00:02:14.247 CXX test/cpp_headers/vmd.o 00:02:14.247 LINK jsoncat 00:02:14.247 CXX test/cpp_headers/xor.o 00:02:14.247 CXX test/cpp_headers/zipf.o 00:02:14.247 LINK bdev_svc 00:02:14.247 LINK verify 00:02:14.247 LINK spdk_dd 00:02:14.506 LINK stub 00:02:14.506 LINK vtophys 00:02:14.506 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:14.506 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:14.506 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:14.506 LINK env_dpdk_post_init 00:02:14.506 LINK ioat_perf 00:02:14.506 LINK spdk_trace 00:02:14.506 LINK pci_ut 00:02:14.766 CC test/event/event_perf/event_perf.o 00:02:14.766 CC test/event/reactor_perf/reactor_perf.o 00:02:14.766 CC test/event/reactor/reactor.o 00:02:14.766 CC 
test/event/app_repeat/app_repeat.o 00:02:14.766 CC examples/vmd/lsvmd/lsvmd.o 00:02:14.766 CC examples/idxd/perf/perf.o 00:02:14.766 CC test/event/scheduler/scheduler.o 00:02:14.766 CC examples/sock/hello_world/hello_sock.o 00:02:14.766 CC examples/vmd/led/led.o 00:02:14.766 LINK spdk_nvme 00:02:14.766 LINK spdk_top 00:02:14.766 LINK spdk_nvme_perf 00:02:14.766 LINK test_dma 00:02:14.766 LINK nvme_fuzz 00:02:14.766 CC examples/thread/thread/thread_ex.o 00:02:14.766 LINK spdk_bdev 00:02:14.766 CC app/vhost/vhost.o 00:02:14.766 LINK reactor_perf 00:02:14.766 LINK event_perf 00:02:14.766 LINK reactor 00:02:14.766 LINK lsvmd 00:02:14.766 LINK app_repeat 00:02:15.026 LINK vhost_fuzz 00:02:15.026 LINK led 00:02:15.026 LINK mem_callbacks 00:02:15.026 LINK scheduler 00:02:15.026 LINK hello_sock 00:02:15.026 LINK spdk_nvme_identify 00:02:15.026 LINK idxd_perf 00:02:15.026 LINK vhost 00:02:15.026 LINK thread 00:02:15.286 CC test/nvme/e2edp/nvme_dp.o 00:02:15.286 CC test/nvme/cuse/cuse.o 00:02:15.286 CC test/nvme/aer/aer.o 00:02:15.286 CC test/nvme/sgl/sgl.o 00:02:15.286 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:15.286 CC test/nvme/overhead/overhead.o 00:02:15.286 CC test/nvme/reserve/reserve.o 00:02:15.286 CC test/nvme/fused_ordering/fused_ordering.o 00:02:15.286 CC test/nvme/simple_copy/simple_copy.o 00:02:15.286 CC test/nvme/startup/startup.o 00:02:15.286 CC test/nvme/connect_stress/connect_stress.o 00:02:15.286 CC test/nvme/err_injection/err_injection.o 00:02:15.286 CC test/nvme/fdp/fdp.o 00:02:15.286 CC test/nvme/reset/reset.o 00:02:15.286 CC test/nvme/boot_partition/boot_partition.o 00:02:15.286 CC test/nvme/compliance/nvme_compliance.o 00:02:15.286 CC test/accel/dif/dif.o 00:02:15.286 CC test/blobfs/mkfs/mkfs.o 00:02:15.286 LINK memory_ut 00:02:15.546 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:15.546 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:15.546 CC examples/nvme/hello_world/hello_world.o 00:02:15.546 CC 
examples/nvme/arbitration/arbitration.o 00:02:15.546 CC examples/nvme/reconnect/reconnect.o 00:02:15.546 CC test/lvol/esnap/esnap.o 00:02:15.546 CC examples/nvme/hotplug/hotplug.o 00:02:15.546 CC examples/nvme/abort/abort.o 00:02:15.546 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:15.546 LINK startup 00:02:15.546 LINK boot_partition 00:02:15.546 LINK err_injection 00:02:15.546 LINK doorbell_aers 00:02:15.546 LINK fused_ordering 00:02:15.546 LINK connect_stress 00:02:15.546 LINK reserve 00:02:15.546 LINK simple_copy 00:02:15.546 CC examples/accel/perf/accel_perf.o 00:02:15.546 LINK nvme_dp 00:02:15.546 LINK aer 00:02:15.546 CC examples/blob/hello_world/hello_blob.o 00:02:15.546 LINK sgl 00:02:15.546 CC examples/blob/cli/blobcli.o 00:02:15.546 LINK reset 00:02:15.546 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:15.546 LINK mkfs 00:02:15.546 LINK overhead 00:02:15.546 LINK fdp 00:02:15.546 LINK nvme_compliance 00:02:15.546 LINK pmr_persistence 00:02:15.546 LINK cmb_copy 00:02:15.806 LINK hello_world 00:02:15.806 LINK hotplug 00:02:15.806 LINK arbitration 00:02:15.806 LINK reconnect 00:02:15.806 LINK iscsi_fuzz 00:02:15.806 LINK abort 00:02:15.806 LINK hello_blob 00:02:15.806 LINK nvme_manage 00:02:15.806 LINK hello_fsdev 00:02:16.066 LINK dif 00:02:16.066 LINK accel_perf 00:02:16.066 LINK blobcli 00:02:16.327 LINK cuse 00:02:16.587 CC examples/bdev/hello_world/hello_bdev.o 00:02:16.587 CC test/bdev/bdevio/bdevio.o 00:02:16.587 CC examples/bdev/bdevperf/bdevperf.o 00:02:16.587 LINK hello_bdev 00:02:16.848 LINK bdevio 00:02:17.107 LINK bdevperf 00:02:17.678 CC examples/nvmf/nvmf/nvmf.o 00:02:17.938 LINK nvmf 00:02:18.879 LINK esnap 00:02:19.139 00:02:19.139 real 0m55.416s 00:02:19.139 user 8m2.226s 00:02:19.139 sys 3m43.798s 00:02:19.139 12:45:16 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:19.139 12:45:16 make -- common/autotest_common.sh@10 -- $ set +x 00:02:19.139 ************************************ 00:02:19.139 END TEST make 00:02:19.139 
************************************ 00:02:19.400 12:45:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:19.400 12:45:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:19.400 12:45:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:19.400 12:45:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.400 12:45:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:19.400 12:45:16 -- pm/common@44 -- $ pid=2047474 00:02:19.400 12:45:16 -- pm/common@50 -- $ kill -TERM 2047474 00:02:19.400 12:45:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.400 12:45:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:19.400 12:45:16 -- pm/common@44 -- $ pid=2047476 00:02:19.400 12:45:16 -- pm/common@50 -- $ kill -TERM 2047476 00:02:19.400 12:45:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.400 12:45:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:19.400 12:45:16 -- pm/common@44 -- $ pid=2047477 00:02:19.400 12:45:16 -- pm/common@50 -- $ kill -TERM 2047477 00:02:19.400 12:45:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.400 12:45:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:19.400 12:45:16 -- pm/common@44 -- $ pid=2047501 00:02:19.400 12:45:16 -- pm/common@50 -- $ sudo -E kill -TERM 2047501 00:02:19.400 12:45:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:19.400 12:45:16 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:19.400 12:45:16 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 
00:02:19.400 12:45:16 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:19.400 12:45:16 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:19.400 12:45:17 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:19.400 12:45:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:19.400 12:45:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:19.400 12:45:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:19.400 12:45:17 -- scripts/common.sh@336 -- # IFS=.-: 00:02:19.400 12:45:17 -- scripts/common.sh@336 -- # read -ra ver1 00:02:19.400 12:45:17 -- scripts/common.sh@337 -- # IFS=.-: 00:02:19.400 12:45:17 -- scripts/common.sh@337 -- # read -ra ver2 00:02:19.400 12:45:17 -- scripts/common.sh@338 -- # local 'op=<' 00:02:19.400 12:45:17 -- scripts/common.sh@340 -- # ver1_l=2 00:02:19.400 12:45:17 -- scripts/common.sh@341 -- # ver2_l=1 00:02:19.400 12:45:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:19.400 12:45:17 -- scripts/common.sh@344 -- # case "$op" in 00:02:19.400 12:45:17 -- scripts/common.sh@345 -- # : 1 00:02:19.400 12:45:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:19.400 12:45:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:19.400 12:45:17 -- scripts/common.sh@365 -- # decimal 1 00:02:19.400 12:45:17 -- scripts/common.sh@353 -- # local d=1 00:02:19.400 12:45:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:19.400 12:45:17 -- scripts/common.sh@355 -- # echo 1 00:02:19.400 12:45:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:19.400 12:45:17 -- scripts/common.sh@366 -- # decimal 2 00:02:19.400 12:45:17 -- scripts/common.sh@353 -- # local d=2 00:02:19.400 12:45:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:19.400 12:45:17 -- scripts/common.sh@355 -- # echo 2 00:02:19.400 12:45:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:19.400 12:45:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:19.400 12:45:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:19.400 12:45:17 -- scripts/common.sh@368 -- # return 0 00:02:19.400 12:45:17 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:19.400 12:45:17 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:19.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:19.400 --rc genhtml_branch_coverage=1 00:02:19.400 --rc genhtml_function_coverage=1 00:02:19.400 --rc genhtml_legend=1 00:02:19.400 --rc geninfo_all_blocks=1 00:02:19.400 --rc geninfo_unexecuted_blocks=1 00:02:19.400 00:02:19.400 ' 00:02:19.400 12:45:17 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:19.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:19.400 --rc genhtml_branch_coverage=1 00:02:19.400 --rc genhtml_function_coverage=1 00:02:19.400 --rc genhtml_legend=1 00:02:19.400 --rc geninfo_all_blocks=1 00:02:19.400 --rc geninfo_unexecuted_blocks=1 00:02:19.400 00:02:19.400 ' 00:02:19.400 12:45:17 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:19.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:19.400 --rc genhtml_branch_coverage=1 00:02:19.400 --rc 
genhtml_function_coverage=1 00:02:19.400 --rc genhtml_legend=1 00:02:19.400 --rc geninfo_all_blocks=1 00:02:19.400 --rc geninfo_unexecuted_blocks=1 00:02:19.400 00:02:19.400 ' 00:02:19.400 12:45:17 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:19.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:19.400 --rc genhtml_branch_coverage=1 00:02:19.400 --rc genhtml_function_coverage=1 00:02:19.400 --rc genhtml_legend=1 00:02:19.400 --rc geninfo_all_blocks=1 00:02:19.400 --rc geninfo_unexecuted_blocks=1 00:02:19.400 00:02:19.400 ' 00:02:19.400 12:45:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:19.400 12:45:17 -- nvmf/common.sh@7 -- # uname -s 00:02:19.400 12:45:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:19.400 12:45:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:19.400 12:45:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:19.400 12:45:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:19.400 12:45:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:19.400 12:45:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:19.400 12:45:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:19.400 12:45:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:19.400 12:45:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:19.400 12:45:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:19.400 12:45:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:19.400 12:45:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:19.400 12:45:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:19.400 12:45:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:19.400 12:45:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:19.400 12:45:17 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:19.400 12:45:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:19.400 12:45:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:19.400 12:45:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:19.400 12:45:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:19.400 12:45:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:19.400 12:45:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.400 12:45:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.662 12:45:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.662 12:45:17 -- paths/export.sh@5 -- # export PATH 00:02:19.662 12:45:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.662 12:45:17 -- nvmf/common.sh@51 -- # : 0 00:02:19.662 12:45:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:19.662 12:45:17 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:19.662 12:45:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:19.662 12:45:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:19.662 12:45:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:19.662 12:45:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:19.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:19.662 12:45:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:19.662 12:45:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:19.662 12:45:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:19.662 12:45:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:19.662 12:45:17 -- spdk/autotest.sh@32 -- # uname -s 00:02:19.662 12:45:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:19.662 12:45:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:19.662 12:45:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.662 12:45:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:19.662 12:45:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.662 12:45:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:19.662 12:45:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:19.662 12:45:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:19.662 12:45:17 -- spdk/autotest.sh@48 -- # udevadm_pid=2110438 00:02:19.662 12:45:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:19.662 12:45:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:19.662 12:45:17 -- pm/common@17 -- # local monitor 00:02:19.662 12:45:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.662 12:45:17 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:19.662 12:45:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.662 12:45:17 -- pm/common@21 -- # date +%s 00:02:19.662 12:45:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.662 12:45:17 -- pm/common@21 -- # date +%s 00:02:19.662 12:45:17 -- pm/common@25 -- # sleep 1 00:02:19.662 12:45:17 -- pm/common@21 -- # date +%s 00:02:19.662 12:45:17 -- pm/common@21 -- # date +%s 00:02:19.662 12:45:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731930317 00:02:19.662 12:45:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731930317 00:02:19.662 12:45:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731930317 00:02:19.662 12:45:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731930317 00:02:19.662 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731930317_collect-cpu-load.pm.log 00:02:19.662 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731930317_collect-vmstat.pm.log 00:02:19.662 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731930317_collect-cpu-temp.pm.log 00:02:19.662 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731930317_collect-bmc-pm.bmc.pm.log 00:02:20.604 
12:45:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:20.604 12:45:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:20.604 12:45:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:20.604 12:45:18 -- common/autotest_common.sh@10 -- # set +x 00:02:20.604 12:45:18 -- spdk/autotest.sh@59 -- # create_test_list 00:02:20.604 12:45:18 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:20.604 12:45:18 -- common/autotest_common.sh@10 -- # set +x 00:02:20.604 12:45:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:20.604 12:45:18 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:20.604 12:45:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:20.604 12:45:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:20.604 12:45:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:20.604 12:45:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:20.604 12:45:18 -- common/autotest_common.sh@1455 -- # uname 00:02:20.604 12:45:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:20.604 12:45:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:20.604 12:45:18 -- common/autotest_common.sh@1475 -- # uname 00:02:20.604 12:45:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:20.604 12:45:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:20.604 12:45:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:20.604 lcov: LCOV version 1.15 00:02:20.604 12:45:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:42.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:42.560 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:45.859 12:45:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:45.859 12:45:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:45.859 12:45:43 -- common/autotest_common.sh@10 -- # set +x 00:02:45.859 12:45:43 -- spdk/autotest.sh@78 -- # rm -f 00:02:45.859 12:45:43 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.159 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:49.159 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:49.159 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:49.159 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:49.159 12:45:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:49.159 12:45:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:49.159 12:45:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:49.159 12:45:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:49.159 12:45:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:49.159 12:45:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:49.159 12:45:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:49.159 12:45:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:49.159 12:45:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:49.159 12:45:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:49.159 12:45:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:49.159 12:45:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:49.159 12:45:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:49.159 12:45:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:49.159 12:45:46 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:49.159 No valid GPT data, bailing 00:02:49.159 12:45:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:49.159 12:45:46 -- scripts/common.sh@394 -- # pt= 00:02:49.159 12:45:46 -- scripts/common.sh@395 -- # return 1 00:02:49.159 12:45:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:49.159 1+0 records in 00:02:49.159 1+0 records out 00:02:49.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448799 s, 234 MB/s 00:02:49.159 12:45:46 -- spdk/autotest.sh@105 -- # sync 00:02:49.159 12:45:46 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:49.159 12:45:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:49.159 12:45:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:55.747 12:45:52 -- spdk/autotest.sh@111 -- # uname -s 00:02:55.747 12:45:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:55.747 12:45:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:55.747 12:45:52 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:57.661 Hugepages 00:02:57.661 node hugesize free / total 00:02:57.661 node0 1048576kB 0 / 0 00:02:57.661 node0 2048kB 1024 / 1024 00:02:57.661 node1 1048576kB 0 / 0 00:02:57.661 node1 2048kB 1024 / 1024 00:02:57.661 00:02:57.661 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.661 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:57.661 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:57.661 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:57.661 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:57.661 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:57.661 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:57.661 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:57.661 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:57.661 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:57.661 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:57.661 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:57.661 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:57.661 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:57.662 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:57.662 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:57.662 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:57.662 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:57.662 12:45:55 -- spdk/autotest.sh@117 -- # uname -s 00:02:57.662 12:45:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:57.662 12:45:55 -- spdk/autotest.sh@119 -- 
# nvme_namespace_revert 00:02:57.662 12:45:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:00.962 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:00.962 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:01.533 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:01.533 12:45:59 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:02.475 12:46:00 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:02.475 12:46:00 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:02.475 12:46:00 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:02.475 12:46:00 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:02.475 12:46:00 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:02.475 12:46:00 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:02.475 12:46:00 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:02.475 12:46:00 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:02.475 12:46:00 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:03:02.475 12:46:00 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:02.475 12:46:00 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:02.475 12:46:00 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.775 Waiting for block devices as requested 00:03:05.775 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:05.775 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:05.775 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:05.775 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:05.775 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:05.775 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:05.775 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:06.036 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:06.036 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:06.036 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:06.298 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:06.298 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:06.298 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:06.298 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:06.558 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:06.558 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:06.558 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:06.818 12:46:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:06.818 12:46:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:06.818 12:46:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:06.818 12:46:04 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:03:06.818 12:46:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:06.818 12:46:04 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:06.818 12:46:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:06.818 12:46:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:06.818 12:46:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:06.818 12:46:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:06.818 12:46:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:06.818 12:46:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:06.818 12:46:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:06.818 12:46:04 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:06.818 12:46:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:06.818 12:46:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:06.818 12:46:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:06.818 12:46:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:06.818 12:46:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:06.818 12:46:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:06.818 12:46:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:06.818 12:46:04 -- common/autotest_common.sh@1541 -- # continue 00:03:06.818 12:46:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:06.818 12:46:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:06.818 12:46:04 -- common/autotest_common.sh@10 -- # set +x 00:03:06.818 12:46:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:06.819 12:46:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:06.819 12:46:04 -- common/autotest_common.sh@10 -- # set +x 00:03:06.819 12:46:04 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.118 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:10.118 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:10.118 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:10.690 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:10.690 12:46:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:10.690 12:46:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:10.690 12:46:08 -- common/autotest_common.sh@10 -- # set +x 00:03:10.690 12:46:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:10.690 12:46:08 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:10.690 12:46:08 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:10.690 12:46:08 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:10.690 12:46:08 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:10.690 12:46:08 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:10.690 12:46:08 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:10.690 12:46:08 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:10.690 12:46:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:10.690 12:46:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:10.690 12:46:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:10.690 12:46:08 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:10.690 12:46:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:10.950 12:46:08 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:10.950 12:46:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:10.950 12:46:08 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:10.951 12:46:08 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:10.951 12:46:08 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:10.951 12:46:08 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:10.951 12:46:08 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:10.951 12:46:08 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:10.951 12:46:08 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:03:10.951 12:46:08 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:03:10.951 12:46:08 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2124900 00:03:10.951 12:46:08 -- common/autotest_common.sh@1583 -- # waitforlisten 2124900 00:03:10.951 12:46:08 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:10.951 12:46:08 -- common/autotest_common.sh@833 -- # '[' -z 2124900 ']' 00:03:10.951 12:46:08 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:10.951 12:46:08 -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:10.951 12:46:08 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:10.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:10.951 12:46:08 -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:10.951 12:46:08 -- common/autotest_common.sh@10 -- # set +x 00:03:10.951 [2024-11-18 12:46:08.527000] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:03:10.951 [2024-11-18 12:46:08.527049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124900 ] 00:03:10.951 [2024-11-18 12:46:08.605084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:10.951 [2024-11-18 12:46:08.647139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:11.211 12:46:08 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:11.211 12:46:08 -- common/autotest_common.sh@866 -- # return 0 00:03:11.211 12:46:08 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:11.211 12:46:08 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:11.211 12:46:08 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:14.508 nvme0n1 00:03:14.508 12:46:11 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:14.508 [2024-11-18 12:46:12.053322] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:14.508 request: 00:03:14.508 { 00:03:14.508 "nvme_ctrlr_name": "nvme0", 00:03:14.508 "password": "test", 00:03:14.508 "method": "bdev_nvme_opal_revert", 00:03:14.508 "req_id": 1 00:03:14.508 } 00:03:14.508 Got JSON-RPC error response 00:03:14.508 response: 00:03:14.508 { 00:03:14.508 "code": -32602, 00:03:14.508 "message": "Invalid parameters" 00:03:14.508 } 00:03:14.508 12:46:12 -- common/autotest_common.sh@1589 -- # true 
00:03:14.508 12:46:12 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:14.508 12:46:12 -- common/autotest_common.sh@1593 -- # killprocess 2124900 00:03:14.508 12:46:12 -- common/autotest_common.sh@952 -- # '[' -z 2124900 ']' 00:03:14.508 12:46:12 -- common/autotest_common.sh@956 -- # kill -0 2124900 00:03:14.508 12:46:12 -- common/autotest_common.sh@957 -- # uname 00:03:14.508 12:46:12 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:14.508 12:46:12 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2124900 00:03:14.508 12:46:12 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:14.508 12:46:12 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:14.508 12:46:12 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2124900' 00:03:14.508 killing process with pid 2124900 00:03:14.508 12:46:12 -- common/autotest_common.sh@971 -- # kill 2124900 00:03:14.508 12:46:12 -- common/autotest_common.sh@976 -- # wait 2124900 00:03:16.419 12:46:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:16.419 12:46:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:16.419 12:46:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:16.419 12:46:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:16.419 12:46:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:16.419 12:46:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:16.419 12:46:13 -- common/autotest_common.sh@10 -- # set +x 00:03:16.419 12:46:13 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:16.419 12:46:13 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:16.419 12:46:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:16.419 12:46:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:16.419 12:46:13 -- common/autotest_common.sh@10 -- # set +x 00:03:16.419 ************************************ 00:03:16.419 START TEST env 00:03:16.419 
************************************ 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:16.419 * Looking for test storage... 00:03:16.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:16.419 12:46:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.419 12:46:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.419 12:46:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.419 12:46:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.419 12:46:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.419 12:46:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.419 12:46:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.419 12:46:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.419 12:46:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.419 12:46:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.419 12:46:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.419 12:46:13 env -- scripts/common.sh@344 -- # case "$op" in 00:03:16.419 12:46:13 env -- scripts/common.sh@345 -- # : 1 00:03:16.419 12:46:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.419 12:46:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:16.419 12:46:13 env -- scripts/common.sh@365 -- # decimal 1 00:03:16.419 12:46:13 env -- scripts/common.sh@353 -- # local d=1 00:03:16.419 12:46:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.419 12:46:13 env -- scripts/common.sh@355 -- # echo 1 00:03:16.419 12:46:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.419 12:46:13 env -- scripts/common.sh@366 -- # decimal 2 00:03:16.419 12:46:13 env -- scripts/common.sh@353 -- # local d=2 00:03:16.419 12:46:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.419 12:46:13 env -- scripts/common.sh@355 -- # echo 2 00:03:16.419 12:46:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.419 12:46:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.419 12:46:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.419 12:46:13 env -- scripts/common.sh@368 -- # return 0 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.419 --rc genhtml_branch_coverage=1 00:03:16.419 --rc genhtml_function_coverage=1 00:03:16.419 --rc genhtml_legend=1 00:03:16.419 --rc geninfo_all_blocks=1 00:03:16.419 --rc geninfo_unexecuted_blocks=1 00:03:16.419 00:03:16.419 ' 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.419 --rc genhtml_branch_coverage=1 00:03:16.419 --rc genhtml_function_coverage=1 00:03:16.419 --rc genhtml_legend=1 00:03:16.419 --rc geninfo_all_blocks=1 00:03:16.419 --rc geninfo_unexecuted_blocks=1 00:03:16.419 00:03:16.419 ' 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:16.419 --rc genhtml_branch_coverage=1 00:03:16.419 --rc genhtml_function_coverage=1 00:03:16.419 --rc genhtml_legend=1 00:03:16.419 --rc geninfo_all_blocks=1 00:03:16.419 --rc geninfo_unexecuted_blocks=1 00:03:16.419 00:03:16.419 ' 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.419 --rc genhtml_branch_coverage=1 00:03:16.419 --rc genhtml_function_coverage=1 00:03:16.419 --rc genhtml_legend=1 00:03:16.419 --rc geninfo_all_blocks=1 00:03:16.419 --rc geninfo_unexecuted_blocks=1 00:03:16.419 00:03:16.419 ' 00:03:16.419 12:46:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:16.419 12:46:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:16.419 12:46:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:16.420 ************************************ 00:03:16.420 START TEST env_memory 00:03:16.420 ************************************ 00:03:16.420 12:46:14 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:16.420 00:03:16.420 00:03:16.420 CUnit - A unit testing framework for C - Version 2.1-3 00:03:16.420 http://cunit.sourceforge.net/ 00:03:16.420 00:03:16.420 00:03:16.420 Suite: memory 00:03:16.420 Test: alloc and free memory map ...[2024-11-18 12:46:14.070876] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:16.420 passed 00:03:16.420 Test: mem map translation ...[2024-11-18 12:46:14.091132] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:16.420 [2024-11-18 
12:46:14.091147] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:16.420 [2024-11-18 12:46:14.091184] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:16.420 [2024-11-18 12:46:14.091190] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:16.681 passed 00:03:16.681 Test: mem map registration ...[2024-11-18 12:46:14.131197] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:16.681 [2024-11-18 12:46:14.131212] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:16.681 passed 00:03:16.681 Test: mem map adjacent registrations ...passed 00:03:16.681 00:03:16.681 Run Summary: Type Total Ran Passed Failed Inactive 00:03:16.681 suites 1 1 n/a 0 0 00:03:16.681 tests 4 4 4 0 0 00:03:16.681 asserts 152 152 152 0 n/a 00:03:16.681 00:03:16.681 Elapsed time = 0.141 seconds 00:03:16.681 00:03:16.681 real 0m0.154s 00:03:16.681 user 0m0.146s 00:03:16.681 sys 0m0.008s 00:03:16.681 12:46:14 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:16.681 12:46:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:16.681 ************************************ 00:03:16.681 END TEST env_memory 00:03:16.681 ************************************ 00:03:16.681 12:46:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:16.681 12:46:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:03:16.681 12:46:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:16.681 12:46:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:16.681 ************************************ 00:03:16.681 START TEST env_vtophys 00:03:16.681 ************************************ 00:03:16.681 12:46:14 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:16.681 EAL: lib.eal log level changed from notice to debug 00:03:16.681 EAL: Detected lcore 0 as core 0 on socket 0 00:03:16.681 EAL: Detected lcore 1 as core 1 on socket 0 00:03:16.681 EAL: Detected lcore 2 as core 2 on socket 0 00:03:16.681 EAL: Detected lcore 3 as core 3 on socket 0 00:03:16.681 EAL: Detected lcore 4 as core 4 on socket 0 00:03:16.681 EAL: Detected lcore 5 as core 5 on socket 0 00:03:16.681 EAL: Detected lcore 6 as core 6 on socket 0 00:03:16.681 EAL: Detected lcore 7 as core 8 on socket 0 00:03:16.681 EAL: Detected lcore 8 as core 9 on socket 0 00:03:16.681 EAL: Detected lcore 9 as core 10 on socket 0 00:03:16.681 EAL: Detected lcore 10 as core 11 on socket 0 00:03:16.681 EAL: Detected lcore 11 as core 12 on socket 0 00:03:16.681 EAL: Detected lcore 12 as core 13 on socket 0 00:03:16.681 EAL: Detected lcore 13 as core 16 on socket 0 00:03:16.681 EAL: Detected lcore 14 as core 17 on socket 0 00:03:16.681 EAL: Detected lcore 15 as core 18 on socket 0 00:03:16.681 EAL: Detected lcore 16 as core 19 on socket 0 00:03:16.681 EAL: Detected lcore 17 as core 20 on socket 0 00:03:16.681 EAL: Detected lcore 18 as core 21 on socket 0 00:03:16.681 EAL: Detected lcore 19 as core 25 on socket 0 00:03:16.681 EAL: Detected lcore 20 as core 26 on socket 0 00:03:16.681 EAL: Detected lcore 21 as core 27 on socket 0 00:03:16.681 EAL: Detected lcore 22 as core 28 on socket 0 00:03:16.681 EAL: Detected lcore 23 as core 29 on socket 0 00:03:16.681 EAL: Detected lcore 24 as core 0 on socket 1 00:03:16.681 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:16.681 EAL: Detected lcore 26 as core 2 on socket 1 00:03:16.681 EAL: Detected lcore 27 as core 3 on socket 1 00:03:16.681 EAL: Detected lcore 28 as core 4 on socket 1 00:03:16.681 EAL: Detected lcore 29 as core 5 on socket 1 00:03:16.681 EAL: Detected lcore 30 as core 6 on socket 1 00:03:16.681 EAL: Detected lcore 31 as core 9 on socket 1 00:03:16.681 EAL: Detected lcore 32 as core 10 on socket 1 00:03:16.681 EAL: Detected lcore 33 as core 11 on socket 1 00:03:16.681 EAL: Detected lcore 34 as core 12 on socket 1 00:03:16.681 EAL: Detected lcore 35 as core 13 on socket 1 00:03:16.681 EAL: Detected lcore 36 as core 16 on socket 1 00:03:16.681 EAL: Detected lcore 37 as core 17 on socket 1 00:03:16.681 EAL: Detected lcore 38 as core 18 on socket 1 00:03:16.681 EAL: Detected lcore 39 as core 19 on socket 1 00:03:16.681 EAL: Detected lcore 40 as core 20 on socket 1 00:03:16.681 EAL: Detected lcore 41 as core 21 on socket 1 00:03:16.681 EAL: Detected lcore 42 as core 24 on socket 1 00:03:16.681 EAL: Detected lcore 43 as core 25 on socket 1 00:03:16.681 EAL: Detected lcore 44 as core 26 on socket 1 00:03:16.681 EAL: Detected lcore 45 as core 27 on socket 1 00:03:16.681 EAL: Detected lcore 46 as core 28 on socket 1 00:03:16.681 EAL: Detected lcore 47 as core 29 on socket 1 00:03:16.681 EAL: Detected lcore 48 as core 0 on socket 0 00:03:16.681 EAL: Detected lcore 49 as core 1 on socket 0 00:03:16.681 EAL: Detected lcore 50 as core 2 on socket 0 00:03:16.681 EAL: Detected lcore 51 as core 3 on socket 0 00:03:16.681 EAL: Detected lcore 52 as core 4 on socket 0 00:03:16.681 EAL: Detected lcore 53 as core 5 on socket 0 00:03:16.681 EAL: Detected lcore 54 as core 6 on socket 0 00:03:16.681 EAL: Detected lcore 55 as core 8 on socket 0 00:03:16.681 EAL: Detected lcore 56 as core 9 on socket 0 00:03:16.681 EAL: Detected lcore 57 as core 10 on socket 0 00:03:16.681 EAL: Detected lcore 58 as core 11 on socket 0 00:03:16.681 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:16.681 EAL: Detected lcore 60 as core 13 on socket 0 00:03:16.681 EAL: Detected lcore 61 as core 16 on socket 0 00:03:16.681 EAL: Detected lcore 62 as core 17 on socket 0 00:03:16.681 EAL: Detected lcore 63 as core 18 on socket 0 00:03:16.681 EAL: Detected lcore 64 as core 19 on socket 0 00:03:16.681 EAL: Detected lcore 65 as core 20 on socket 0 00:03:16.681 EAL: Detected lcore 66 as core 21 on socket 0 00:03:16.681 EAL: Detected lcore 67 as core 25 on socket 0 00:03:16.681 EAL: Detected lcore 68 as core 26 on socket 0 00:03:16.682 EAL: Detected lcore 69 as core 27 on socket 0 00:03:16.682 EAL: Detected lcore 70 as core 28 on socket 0 00:03:16.682 EAL: Detected lcore 71 as core 29 on socket 0 00:03:16.682 EAL: Detected lcore 72 as core 0 on socket 1 00:03:16.682 EAL: Detected lcore 73 as core 1 on socket 1 00:03:16.682 EAL: Detected lcore 74 as core 2 on socket 1 00:03:16.682 EAL: Detected lcore 75 as core 3 on socket 1 00:03:16.682 EAL: Detected lcore 76 as core 4 on socket 1 00:03:16.682 EAL: Detected lcore 77 as core 5 on socket 1 00:03:16.682 EAL: Detected lcore 78 as core 6 on socket 1 00:03:16.682 EAL: Detected lcore 79 as core 9 on socket 1 00:03:16.682 EAL: Detected lcore 80 as core 10 on socket 1 00:03:16.682 EAL: Detected lcore 81 as core 11 on socket 1 00:03:16.682 EAL: Detected lcore 82 as core 12 on socket 1 00:03:16.682 EAL: Detected lcore 83 as core 13 on socket 1 00:03:16.682 EAL: Detected lcore 84 as core 16 on socket 1 00:03:16.682 EAL: Detected lcore 85 as core 17 on socket 1 00:03:16.682 EAL: Detected lcore 86 as core 18 on socket 1 00:03:16.682 EAL: Detected lcore 87 as core 19 on socket 1 00:03:16.682 EAL: Detected lcore 88 as core 20 on socket 1 00:03:16.682 EAL: Detected lcore 89 as core 21 on socket 1 00:03:16.682 EAL: Detected lcore 90 as core 24 on socket 1 00:03:16.682 EAL: Detected lcore 91 as core 25 on socket 1 00:03:16.682 EAL: Detected lcore 92 as core 26 on socket 1 00:03:16.682 EAL: Detected lcore 93 as core 
27 on socket 1
00:03:16.682 EAL: Detected lcore 94 as core 28 on socket 1
00:03:16.682 EAL: Detected lcore 95 as core 29 on socket 1
00:03:16.682 EAL: Maximum logical cores by configuration: 128
00:03:16.682 EAL: Detected CPU lcores: 96
00:03:16.682 EAL: Detected NUMA nodes: 2
00:03:16.682 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:16.682 EAL: Detected shared linkage of DPDK
00:03:16.682 EAL: No shared files mode enabled, IPC will be disabled
00:03:16.682 EAL: Bus pci wants IOVA as 'DC'
00:03:16.682 EAL: Buses did not request a specific IOVA mode.
00:03:16.682 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:16.682 EAL: Selected IOVA mode 'VA'
00:03:16.682 EAL: Probing VFIO support...
00:03:16.682 EAL: IOMMU type 1 (Type 1) is supported
00:03:16.682 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:16.682 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:16.682 EAL: VFIO support initialized
00:03:16.682 EAL: Ask a virtual area of 0x2e000 bytes
00:03:16.682 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:16.682 EAL: Setting up physically contiguous memory...
00:03:16.682 EAL: Setting maximum number of open files to 524288
00:03:16.682 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:16.682 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:16.682 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:16.682 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.682 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:16.682 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:16.682 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.682 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:16.682 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:16.682 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.682 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:16.682 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:16.682 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.682 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:16.682 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:16.682 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.682 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:16.682 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:16.682 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.682 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:16.682 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:16.682 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.682 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:16.682 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:16.682 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.682 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:16.682 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:16.682 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:16.682 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.682 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:16.682 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:16.682 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.682 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:16.682 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:16.682 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.682 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:16.682 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:16.682 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.682 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:16.682 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:16.682 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.682 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:16.682 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:16.682 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.682 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:16.682 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:16.682 EAL: Ask a virtual area of 0x61000 bytes
00:03:16.682 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:16.682 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:16.682 EAL: Ask a virtual area of 0x400000000 bytes
00:03:16.682 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:16.682 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:16.682 EAL: Hugepages will be freed exactly as allocated.
00:03:16.682 EAL: No shared files mode enabled, IPC is disabled
00:03:16.682 EAL: No shared files mode enabled, IPC is disabled
00:03:16.682 EAL: TSC frequency is ~2300000 KHz
00:03:16.682 EAL: Main lcore 0 is ready (tid=7f57d4a7fa00;cpuset=[0])
00:03:16.682 EAL: Trying to obtain current memory policy.
00:03:16.682 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.682 EAL: Restoring previous memory policy: 0
00:03:16.682 EAL: request: mp_malloc_sync
00:03:16.682 EAL: No shared files mode enabled, IPC is disabled
00:03:16.682 EAL: Heap on socket 0 was expanded by 2MB
00:03:16.682 EAL: No shared files mode enabled, IPC is disabled
00:03:16.682 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:16.682 EAL: Mem event callback 'spdk:(nil)' registered
00:03:16.682
00:03:16.682
00:03:16.682 CUnit - A unit testing framework for C - Version 2.1-3
00:03:16.682 http://cunit.sourceforge.net/
00:03:16.682
00:03:16.682
00:03:16.682 Suite: components_suite
00:03:16.682 Test: vtophys_malloc_test ...passed
00:03:16.682 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:16.682 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.682 EAL: Restoring previous memory policy: 4
00:03:16.682 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.682 EAL: request: mp_malloc_sync
00:03:16.682 EAL: No shared files mode enabled, IPC is disabled
00:03:16.682 EAL: Heap on socket 0 was expanded by 4MB
00:03:16.682 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.682 EAL: request: mp_malloc_sync
00:03:16.682 EAL: No shared files mode enabled, IPC is disabled
00:03:16.682 EAL: Heap on socket 0 was shrunk by 4MB
00:03:16.682 EAL: Trying to obtain current memory policy.
00:03:16.682 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.682 EAL: Restoring previous memory policy: 4
00:03:16.682 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.682 EAL: request: mp_malloc_sync
00:03:16.682 EAL: No shared files mode enabled, IPC is disabled
00:03:16.682 EAL: Heap on socket 0 was expanded by 6MB
00:03:16.682 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.682 EAL: request: mp_malloc_sync
00:03:16.682 EAL: No shared files mode enabled, IPC is disabled
00:03:16.682 EAL: Heap on socket 0 was shrunk by 6MB
00:03:16.682 EAL: Trying to obtain current memory policy.
00:03:16.682 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.682 EAL: Restoring previous memory policy: 4
00:03:16.682 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.682 EAL: request: mp_malloc_sync
00:03:16.682 EAL: No shared files mode enabled, IPC is disabled
00:03:16.682 EAL: Heap on socket 0 was expanded by 10MB
00:03:16.683 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.683 EAL: request: mp_malloc_sync
00:03:16.683 EAL: No shared files mode enabled, IPC is disabled
00:03:16.683 EAL: Heap on socket 0 was shrunk by 10MB
00:03:16.683 EAL: Trying to obtain current memory policy.
00:03:16.683 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.683 EAL: Restoring previous memory policy: 4
00:03:16.683 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.683 EAL: request: mp_malloc_sync
00:03:16.683 EAL: No shared files mode enabled, IPC is disabled
00:03:16.683 EAL: Heap on socket 0 was expanded by 18MB
00:03:16.683 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.683 EAL: request: mp_malloc_sync
00:03:16.683 EAL: No shared files mode enabled, IPC is disabled
00:03:16.683 EAL: Heap on socket 0 was shrunk by 18MB
00:03:16.683 EAL: Trying to obtain current memory policy.
00:03:16.683 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.683 EAL: Restoring previous memory policy: 4
00:03:16.683 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.683 EAL: request: mp_malloc_sync
00:03:16.683 EAL: No shared files mode enabled, IPC is disabled
00:03:16.683 EAL: Heap on socket 0 was expanded by 34MB
00:03:16.683 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.683 EAL: request: mp_malloc_sync
00:03:16.683 EAL: No shared files mode enabled, IPC is disabled
00:03:16.683 EAL: Heap on socket 0 was shrunk by 34MB
00:03:16.683 EAL: Trying to obtain current memory policy.
00:03:16.683 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.683 EAL: Restoring previous memory policy: 4
00:03:16.683 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.683 EAL: request: mp_malloc_sync
00:03:16.683 EAL: No shared files mode enabled, IPC is disabled
00:03:16.683 EAL: Heap on socket 0 was expanded by 66MB
00:03:16.943 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.943 EAL: request: mp_malloc_sync
00:03:16.943 EAL: No shared files mode enabled, IPC is disabled
00:03:16.943 EAL: Heap on socket 0 was shrunk by 66MB
00:03:16.943 EAL: Trying to obtain current memory policy.
00:03:16.943 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.943 EAL: Restoring previous memory policy: 4
00:03:16.943 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.943 EAL: request: mp_malloc_sync
00:03:16.943 EAL: No shared files mode enabled, IPC is disabled
00:03:16.943 EAL: Heap on socket 0 was expanded by 130MB
00:03:16.943 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.943 EAL: request: mp_malloc_sync
00:03:16.943 EAL: No shared files mode enabled, IPC is disabled
00:03:16.943 EAL: Heap on socket 0 was shrunk by 130MB
00:03:16.943 EAL: Trying to obtain current memory policy.
00:03:16.943 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.943 EAL: Restoring previous memory policy: 4
00:03:16.943 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.943 EAL: request: mp_malloc_sync
00:03:16.943 EAL: No shared files mode enabled, IPC is disabled
00:03:16.943 EAL: Heap on socket 0 was expanded by 258MB
00:03:16.943 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.943 EAL: request: mp_malloc_sync
00:03:16.943 EAL: No shared files mode enabled, IPC is disabled
00:03:16.943 EAL: Heap on socket 0 was shrunk by 258MB
00:03:16.943 EAL: Trying to obtain current memory policy.
00:03:16.943 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:17.204 EAL: Restoring previous memory policy: 4
00:03:17.204 EAL: Calling mem event callback 'spdk:(nil)'
00:03:17.204 EAL: request: mp_malloc_sync
00:03:17.204 EAL: No shared files mode enabled, IPC is disabled
00:03:17.204 EAL: Heap on socket 0 was expanded by 514MB
00:03:17.204 EAL: Calling mem event callback 'spdk:(nil)'
00:03:17.204 EAL: request: mp_malloc_sync
00:03:17.204 EAL: No shared files mode enabled, IPC is disabled
00:03:17.204 EAL: Heap on socket 0 was shrunk by 514MB
00:03:17.204 EAL: Trying to obtain current memory policy.
00:03:17.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:17.464 EAL: Restoring previous memory policy: 4
00:03:17.464 EAL: Calling mem event callback 'spdk:(nil)'
00:03:17.464 EAL: request: mp_malloc_sync
00:03:17.464 EAL: No shared files mode enabled, IPC is disabled
00:03:17.464 EAL: Heap on socket 0 was expanded by 1026MB
00:03:17.725 EAL: Calling mem event callback 'spdk:(nil)'
00:03:17.725 EAL: request: mp_malloc_sync
00:03:17.725 EAL: No shared files mode enabled, IPC is disabled
00:03:17.725 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:17.725 passed
00:03:17.725
00:03:17.725 Run Summary: Type Total Ran Passed Failed Inactive
00:03:17.725 suites 1 1 n/a 0 0
00:03:17.725 tests 2 2 2 0 0
00:03:17.725 asserts 497 497 497 0 n/a
00:03:17.725
00:03:17.725 Elapsed time = 0.974 seconds
00:03:17.725 EAL: Calling mem event callback 'spdk:(nil)'
00:03:17.725 EAL: request: mp_malloc_sync
00:03:17.725 EAL: No shared files mode enabled, IPC is disabled
00:03:17.725 EAL: Heap on socket 0 was shrunk by 2MB
00:03:17.725 EAL: No shared files mode enabled, IPC is disabled
00:03:17.725 EAL: No shared files mode enabled, IPC is disabled
00:03:17.725 EAL: No shared files mode enabled, IPC is disabled
00:03:17.725
00:03:17.725 real 0m1.111s
00:03:17.725 user 0m0.653s
00:03:17.725 sys 0m0.426s
00:03:17.725 12:46:15 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:17.725 12:46:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:17.725 ************************************
00:03:17.725 END TEST env_vtophys
00:03:17.725 ************************************
00:03:17.725 12:46:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:17.725 12:46:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:17.725 12:46:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:17.725 12:46:15 env -- common/autotest_common.sh@10 -- # set +x
00:03:17.985 ************************************
00:03:17.985 START TEST env_pci ************************************
00:03:17.985 12:46:15 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:17.985
00:03:17.985
00:03:17.985 CUnit - A unit testing framework for C - Version 2.1-3
00:03:17.985 http://cunit.sourceforge.net/
00:03:17.985
00:03:17.985
00:03:17.985 Suite: pci
00:03:17.985 Test: pci_hook ...[2024-11-18 12:46:15.452463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2126187 has claimed it
00:03:17.985 EAL: Cannot find device (10000:00:01.0)
00:03:17.985 EAL: Failed to attach device on primary process
00:03:17.985 passed
00:03:17.985
00:03:17.985 Run Summary: Type Total Ran Passed Failed Inactive
00:03:17.985 suites 1 1 n/a 0 0
00:03:17.985 tests 1 1 1 0 0
00:03:17.985 asserts 25 25 25 0 n/a
00:03:17.985
00:03:17.985 Elapsed time = 0.026 seconds
00:03:17.985
00:03:17.985 real 0m0.046s
00:03:17.985 user 0m0.013s
00:03:17.985 sys 0m0.033s
00:03:17.985 12:46:15 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:17.985 12:46:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:17.985 ************************************
00:03:17.985 END TEST env_pci
00:03:17.985 ************************************
00:03:17.985 12:46:15 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:17.985 12:46:15 env -- env/env.sh@15 -- # uname
00:03:17.985 12:46:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:17.985 12:46:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:17.985 12:46:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
12:46:15 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:03:17.985 12:46:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:17.985 12:46:15 env -- common/autotest_common.sh@10 -- # set +x
00:03:17.985 ************************************
00:03:17.985 START TEST env_dpdk_post_init ************************************
00:03:17.985 12:46:15 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:17.985 EAL: Detected CPU lcores: 96
00:03:17.985 EAL: Detected NUMA nodes: 2
00:03:17.985 EAL: Detected shared linkage of DPDK
00:03:17.985 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:17.985 EAL: Selected IOVA mode 'VA'
00:03:17.985 EAL: VFIO support initialized
00:03:17.985 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:17.985 EAL: Using IOMMU type 1 (Type 1)
00:03:18.245 EAL: Ignore mapping IO port bar(1)
00:03:18.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:18.245 EAL: Ignore mapping IO port bar(1)
00:03:18.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:18.245 EAL: Ignore mapping IO port bar(1)
00:03:18.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:18.246 EAL: Ignore mapping IO port bar(1)
00:03:18.246 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:18.246 EAL: Ignore mapping IO port bar(1)
00:03:18.246 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:18.246 EAL: Ignore mapping IO port bar(1)
00:03:18.246 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:18.246 EAL: Ignore mapping IO port bar(1)
00:03:18.246 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:18.246 EAL: Ignore mapping IO port bar(1)
00:03:18.246 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:18.816 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:03:19.076 EAL: Ignore mapping IO port bar(1)
00:03:19.076 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:19.076 EAL: Ignore mapping IO port bar(1)
00:03:19.076 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:19.076 EAL: Ignore mapping IO port bar(1)
00:03:19.076 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:19.076 EAL: Ignore mapping IO port bar(1)
00:03:19.076 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:03:19.076 EAL: Ignore mapping IO port bar(1)
00:03:19.076 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:03:19.076 EAL: Ignore mapping IO port bar(1)
00:03:19.076 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:03:19.076 EAL: Ignore mapping IO port bar(1)
00:03:19.076 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:03:19.076 EAL: Ignore mapping IO port bar(1)
00:03:19.076 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:03:22.372 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:03:22.372 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:03:22.372 Starting DPDK initialization...
00:03:22.372 Starting SPDK post initialization...
00:03:22.372 SPDK NVMe probe
00:03:22.372 Attaching to 0000:5e:00.0
00:03:22.372 Attached to 0000:5e:00.0
00:03:22.372 Cleaning up...
00:03:22.372
00:03:22.373 real 0m4.368s
00:03:22.373 user 0m3.001s
00:03:22.373 sys 0m0.441s
00:03:22.373 12:46:19 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:22.373 12:46:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:22.373 ************************************
00:03:22.373 END TEST env_dpdk_post_init
00:03:22.373 ************************************
00:03:22.373 12:46:19 env -- env/env.sh@26 -- # uname
00:03:22.373 12:46:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:22.373 12:46:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:22.373 12:46:19 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:22.373 12:46:19 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:22.373 12:46:19 env -- common/autotest_common.sh@10 -- # set +x
00:03:22.373 ************************************
00:03:22.373 START TEST env_mem_callbacks ************************************
00:03:22.373 12:46:20 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:22.373 EAL: Detected CPU lcores: 96
00:03:22.373 EAL: Detected NUMA nodes: 2
00:03:22.373 EAL: Detected shared linkage of DPDK
00:03:22.373 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:22.373 EAL: Selected IOVA mode 'VA'
00:03:22.373 EAL: VFIO support initialized
00:03:22.373 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:22.373
00:03:22.373
00:03:22.373 CUnit - A unit testing framework for C - Version 2.1-3
00:03:22.373 http://cunit.sourceforge.net/
00:03:22.373
00:03:22.373
00:03:22.373 Suite: memory
00:03:22.373 Test: test ...
00:03:22.373 register 0x200000200000 2097152
00:03:22.373 malloc 3145728
00:03:22.373 register 0x200000400000 4194304
00:03:22.373 buf 0x200000500000 len 3145728 PASSED
00:03:22.373 malloc 64
00:03:22.373 buf 0x2000004fff40 len 64 PASSED
00:03:22.373 malloc 4194304
00:03:22.373 register 0x200000800000 6291456
00:03:22.373 buf 0x200000a00000 len 4194304 PASSED
00:03:22.373 free 0x200000500000 3145728
00:03:22.373 free 0x2000004fff40 64
00:03:22.373 unregister 0x200000400000 4194304 PASSED
00:03:22.373 free 0x200000a00000 4194304
00:03:22.373 unregister 0x200000800000 6291456 PASSED
00:03:22.373 malloc 8388608
00:03:22.373 register 0x200000400000 10485760
00:03:22.373 buf 0x200000600000 len 8388608 PASSED
00:03:22.373 free 0x200000600000 8388608
00:03:22.373 unregister 0x200000400000 10485760 PASSED
00:03:22.373 passed
00:03:22.373
00:03:22.373 Run Summary: Type Total Ran Passed Failed Inactive
00:03:22.373 suites 1 1 n/a 0 0
00:03:22.373 tests 1 1 1 0 0
00:03:22.373 asserts 15 15 15 0 n/a
00:03:22.373
00:03:22.373 Elapsed time = 0.007 seconds
00:03:22.373
00:03:22.373 real 0m0.065s
00:03:22.373 user 0m0.019s
00:03:22.373 sys 0m0.046s
00:03:22.373 12:46:20 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:22.373 12:46:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:22.373 ************************************
00:03:22.373 END TEST env_mem_callbacks
00:03:22.373 ************************************
00:03:22.633
00:03:22.633 real 0m6.288s
00:03:22.633 user 0m4.083s
00:03:22.633 sys 0m1.281s
00:03:22.633 12:46:20 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:22.633 12:46:20 env -- common/autotest_common.sh@10 -- # set +x
00:03:22.633 ************************************
00:03:22.633 END TEST env
00:03:22.633 ************************************
00:03:22.633 12:46:20 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:22.633 12:46:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:22.633 12:46:20 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:22.633 12:46:20 -- common/autotest_common.sh@10 -- # set +x
00:03:22.633 ************************************
00:03:22.633 START TEST rpc
00:03:22.633 ************************************
00:03:22.633 12:46:20 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:22.633 * Looking for test storage...
00:03:22.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:22.633 12:46:20 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:22.633 12:46:20 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:03:22.633 12:46:20 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:22.894 12:46:20 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:22.894 12:46:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:22.894 12:46:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:22.894 12:46:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:22.894 12:46:20 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:22.894 12:46:20 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:22.894 12:46:20 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:22.894 12:46:20 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:22.894 12:46:20 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:22.894 12:46:20 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:22.894 12:46:20 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:22.894 12:46:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:22.894 12:46:20 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:22.894 12:46:20 rpc -- scripts/common.sh@345 -- # : 1
00:03:22.894 12:46:20 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:22.894 12:46:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:22.894 12:46:20 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:22.894 12:46:20 rpc -- scripts/common.sh@353 -- # local d=1
00:03:22.894 12:46:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:22.894 12:46:20 rpc -- scripts/common.sh@355 -- # echo 1
00:03:22.894 12:46:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:22.894 12:46:20 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:22.894 12:46:20 rpc -- scripts/common.sh@353 -- # local d=2
00:03:22.894 12:46:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:22.894 12:46:20 rpc -- scripts/common.sh@355 -- # echo 2
00:03:22.894 12:46:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:22.894 12:46:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:22.894 12:46:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:22.894 12:46:20 rpc -- scripts/common.sh@368 -- # return 0
00:03:22.894 12:46:20 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:22.894 12:46:20 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:22.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:22.895 --rc genhtml_branch_coverage=1
00:03:22.895 --rc genhtml_function_coverage=1
00:03:22.895 --rc genhtml_legend=1
00:03:22.895 --rc geninfo_all_blocks=1
00:03:22.895 --rc geninfo_unexecuted_blocks=1
00:03:22.895
00:03:22.895 '
00:03:22.895 12:46:20 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:22.895 --rc genhtml_branch_coverage=1
00:03:22.895 --rc genhtml_function_coverage=1
00:03:22.895 --rc genhtml_legend=1
00:03:22.895 --rc geninfo_all_blocks=1
00:03:22.895 --rc geninfo_unexecuted_blocks=1
00:03:22.895
00:03:22.895 '
00:03:22.895 12:46:20 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:22.895 --rc genhtml_branch_coverage=1
00:03:22.895 --rc genhtml_function_coverage=1
00:03:22.895 --rc genhtml_legend=1
00:03:22.895 --rc geninfo_all_blocks=1
00:03:22.895 --rc geninfo_unexecuted_blocks=1
00:03:22.895
00:03:22.895 '
00:03:22.895 12:46:20 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:03:22.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:22.895 --rc genhtml_branch_coverage=1
00:03:22.895 --rc genhtml_function_coverage=1
00:03:22.895 --rc genhtml_legend=1
00:03:22.895 --rc geninfo_all_blocks=1
00:03:22.895 --rc geninfo_unexecuted_blocks=1
00:03:22.895
00:03:22.895 '
00:03:22.895 12:46:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2127050
00:03:22.895 12:46:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:22.895 12:46:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:22.895 12:46:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2127050
00:03:22.895 12:46:20 rpc -- common/autotest_common.sh@833 -- # '[' -z 2127050 ']'
00:03:22.895 12:46:20 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:22.895 12:46:20 rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:03:22.895 12:46:20 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:22.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:22.895 12:46:20 rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:03:22.895 12:46:20 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:22.895 [2024-11-18 12:46:20.402945] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:03:22.895 [2024-11-18 12:46:20.402992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127050 ]
00:03:22.895 [2024-11-18 12:46:20.477765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:22.895 [2024-11-18 12:46:20.517618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:22.895 [2024-11-18 12:46:20.517655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2127050' to capture a snapshot of events at runtime.
00:03:22.895 [2024-11-18 12:46:20.517663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:22.895 [2024-11-18 12:46:20.517672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:22.895 [2024-11-18 12:46:20.517676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2127050 for offline analysis/debug.
00:03:22.895 [2024-11-18 12:46:20.518223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:23.156 12:46:20 rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:03:23.156 12:46:20 rpc -- common/autotest_common.sh@866 -- # return 0
00:03:23.156 12:46:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:23.156 12:46:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:23.156 12:46:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:23.156 12:46:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:23.156 12:46:20 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:23.156 12:46:20 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:23.156 12:46:20 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:23.156 ************************************
00:03:23.156 START TEST rpc_integrity
00:03:23.156 ************************************
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity
00:03:23.156 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:23.156 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:23.156 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:23.156 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:23.156 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:23.156 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:23.156 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:23.156 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:23.156 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:23.156 {
00:03:23.156 "name": "Malloc0",
00:03:23.156 "aliases": [
00:03:23.156 "a281331a-4ebf-48fa-9544-409d67206b3f"
00:03:23.156 ],
00:03:23.156 "product_name": "Malloc disk",
00:03:23.156 "block_size": 512,
00:03:23.156 "num_blocks": 16384,
00:03:23.156 "uuid": "a281331a-4ebf-48fa-9544-409d67206b3f",
00:03:23.156 "assigned_rate_limits": {
00:03:23.156 "rw_ios_per_sec": 0,
00:03:23.156 "rw_mbytes_per_sec": 0,
00:03:23.156 "r_mbytes_per_sec": 0,
00:03:23.156 "w_mbytes_per_sec": 0
00:03:23.156 },
00:03:23.156 "claimed": false,
00:03:23.156 "zoned": false,
00:03:23.156 "supported_io_types": {
00:03:23.156 "read": true,
00:03:23.156 "write": true,
00:03:23.156 "unmap": true,
00:03:23.156 "flush": true,
00:03:23.156 "reset": true,
00:03:23.156 "nvme_admin": false,
00:03:23.156 "nvme_io": false,
00:03:23.156 "nvme_io_md": false,
00:03:23.156 "write_zeroes": true,
00:03:23.156 "zcopy": true,
00:03:23.156 "get_zone_info": false,
00:03:23.156 "zone_management": false,
00:03:23.156 "zone_append": false,
00:03:23.156 "compare": false,
00:03:23.156 "compare_and_write": false,
00:03:23.156 "abort": true,
00:03:23.156 "seek_hole": false,
00:03:23.156 "seek_data": false,
00:03:23.156 "copy": true,
00:03:23.156 "nvme_iov_md": false
00:03:23.156 },
00:03:23.156 "memory_domains": [
00:03:23.156 {
00:03:23.156 "dma_device_id": "system",
00:03:23.156 "dma_device_type": 1
00:03:23.156 },
00:03:23.156 {
00:03:23.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:23.156 "dma_device_type": 2
00:03:23.156 }
00:03:23.156 ],
00:03:23.156 "driver_specific": {}
00:03:23.156 }
00:03:23.156 ]'
00:03:23.416 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:23.416 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:23.416 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:23.416 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:23.416 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:23.416 [2024-11-18 12:46:20.901647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:23.416 [2024-11-18 12:46:20.901680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:23.416 [2024-11-18 12:46:20.901692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac67d0
00:03:23.416 [2024-11-18 12:46:20.901699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:23.416 [2024-11-18 12:46:20.902835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:23.416 [2024-11-18 12:46:20.902855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:23.416 Passthru0
00:03:23.416 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:23.416 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:23.416 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:23.416 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:23.416 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:23.416 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:23.416 {
00:03:23.416 "name": "Malloc0",
00:03:23.416 "aliases": [
00:03:23.416 "a281331a-4ebf-48fa-9544-409d67206b3f"
00:03:23.416 ],
00:03:23.416 "product_name": "Malloc disk",
00:03:23.416 "block_size": 512,
00:03:23.416 "num_blocks": 16384,
00:03:23.416 "uuid": "a281331a-4ebf-48fa-9544-409d67206b3f",
00:03:23.416 "assigned_rate_limits": {
00:03:23.416 "rw_ios_per_sec": 0,
00:03:23.416 "rw_mbytes_per_sec": 0,
00:03:23.416 "r_mbytes_per_sec": 0,
00:03:23.416 "w_mbytes_per_sec": 0
00:03:23.416 },
00:03:23.416 "claimed": true,
00:03:23.416 "claim_type": "exclusive_write",
00:03:23.416 "zoned": false,
00:03:23.416 "supported_io_types": {
00:03:23.416 "read": true,
00:03:23.416 "write": true,
00:03:23.416 "unmap": true,
00:03:23.416 "flush": true,
00:03:23.416 "reset": true,
00:03:23.416 "nvme_admin": false,
00:03:23.416 "nvme_io": false,
00:03:23.416 "nvme_io_md": false,
00:03:23.416 "write_zeroes": true,
00:03:23.416 "zcopy": true,
00:03:23.416 "get_zone_info": false,
00:03:23.416 "zone_management": false,
00:03:23.416 "zone_append": false,
00:03:23.416 "compare": false,
00:03:23.416 "compare_and_write": false,
00:03:23.417 "abort": true,
00:03:23.417 "seek_hole": false,
00:03:23.417 "seek_data": false,
00:03:23.417 "copy": true,
00:03:23.417 "nvme_iov_md": false
00:03:23.417 },
00:03:23.417 "memory_domains": [
00:03:23.417 {
00:03:23.417 "dma_device_id": "system",
00:03:23.417 "dma_device_type": 1
00:03:23.417 },
00:03:23.417 {
00:03:23.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:23.417 "dma_device_type": 2
00:03:23.417 }
00:03:23.417 ],
00:03:23.417 "driver_specific": {}
00:03:23.417 },
00:03:23.417 {
00:03:23.417 "name": "Passthru0", 00:03:23.417 "aliases": [ 00:03:23.417 "550dc883-560a-50c0-b068-07434a57aa7f" 00:03:23.417 ], 00:03:23.417 "product_name": "passthru", 00:03:23.417 "block_size": 512, 00:03:23.417 "num_blocks": 16384, 00:03:23.417 "uuid": "550dc883-560a-50c0-b068-07434a57aa7f", 00:03:23.417 "assigned_rate_limits": { 00:03:23.417 "rw_ios_per_sec": 0, 00:03:23.417 "rw_mbytes_per_sec": 0, 00:03:23.417 "r_mbytes_per_sec": 0, 00:03:23.417 "w_mbytes_per_sec": 0 00:03:23.417 }, 00:03:23.417 "claimed": false, 00:03:23.417 "zoned": false, 00:03:23.417 "supported_io_types": { 00:03:23.417 "read": true, 00:03:23.417 "write": true, 00:03:23.417 "unmap": true, 00:03:23.417 "flush": true, 00:03:23.417 "reset": true, 00:03:23.417 "nvme_admin": false, 00:03:23.417 "nvme_io": false, 00:03:23.417 "nvme_io_md": false, 00:03:23.417 "write_zeroes": true, 00:03:23.417 "zcopy": true, 00:03:23.417 "get_zone_info": false, 00:03:23.417 "zone_management": false, 00:03:23.417 "zone_append": false, 00:03:23.417 "compare": false, 00:03:23.417 "compare_and_write": false, 00:03:23.417 "abort": true, 00:03:23.417 "seek_hole": false, 00:03:23.417 "seek_data": false, 00:03:23.417 "copy": true, 00:03:23.417 "nvme_iov_md": false 00:03:23.417 }, 00:03:23.417 "memory_domains": [ 00:03:23.417 { 00:03:23.417 "dma_device_id": "system", 00:03:23.417 "dma_device_type": 1 00:03:23.417 }, 00:03:23.417 { 00:03:23.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.417 "dma_device_type": 2 00:03:23.417 } 00:03:23.417 ], 00:03:23.417 "driver_specific": { 00:03:23.417 "passthru": { 00:03:23.417 "name": "Passthru0", 00:03:23.417 "base_bdev_name": "Malloc0" 00:03:23.417 } 00:03:23.417 } 00:03:23.417 } 00:03:23.417 ]' 00:03:23.417 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:23.417 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:23.417 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:23.417 12:46:20 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.417 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.417 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.417 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:23.417 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.417 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.417 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.417 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:23.417 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.417 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.417 12:46:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.417 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:23.417 12:46:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:23.417 12:46:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:23.417 00:03:23.417 real 0m0.263s 00:03:23.417 user 0m0.168s 00:03:23.417 sys 0m0.033s 00:03:23.417 12:46:21 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:23.417 12:46:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.417 ************************************ 00:03:23.417 END TEST rpc_integrity 00:03:23.417 ************************************ 00:03:23.417 12:46:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:23.417 12:46:21 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:23.417 12:46:21 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:23.417 12:46:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.417 ************************************ 00:03:23.417 START TEST rpc_plugins 
00:03:23.417 ************************************ 00:03:23.417 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:23.417 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:23.417 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.417 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.677 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.677 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:23.677 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:23.677 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.677 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.677 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.677 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:23.677 { 00:03:23.677 "name": "Malloc1", 00:03:23.677 "aliases": [ 00:03:23.677 "f397318a-57db-41ef-999c-4bfc525c1be6" 00:03:23.677 ], 00:03:23.677 "product_name": "Malloc disk", 00:03:23.677 "block_size": 4096, 00:03:23.677 "num_blocks": 256, 00:03:23.677 "uuid": "f397318a-57db-41ef-999c-4bfc525c1be6", 00:03:23.677 "assigned_rate_limits": { 00:03:23.677 "rw_ios_per_sec": 0, 00:03:23.677 "rw_mbytes_per_sec": 0, 00:03:23.677 "r_mbytes_per_sec": 0, 00:03:23.677 "w_mbytes_per_sec": 0 00:03:23.677 }, 00:03:23.677 "claimed": false, 00:03:23.677 "zoned": false, 00:03:23.677 "supported_io_types": { 00:03:23.677 "read": true, 00:03:23.677 "write": true, 00:03:23.677 "unmap": true, 00:03:23.677 "flush": true, 00:03:23.677 "reset": true, 00:03:23.677 "nvme_admin": false, 00:03:23.677 "nvme_io": false, 00:03:23.677 "nvme_io_md": false, 00:03:23.677 "write_zeroes": true, 00:03:23.677 "zcopy": true, 00:03:23.677 "get_zone_info": false, 00:03:23.678 "zone_management": false, 00:03:23.678 
"zone_append": false, 00:03:23.678 "compare": false, 00:03:23.678 "compare_and_write": false, 00:03:23.678 "abort": true, 00:03:23.678 "seek_hole": false, 00:03:23.678 "seek_data": false, 00:03:23.678 "copy": true, 00:03:23.678 "nvme_iov_md": false 00:03:23.678 }, 00:03:23.678 "memory_domains": [ 00:03:23.678 { 00:03:23.678 "dma_device_id": "system", 00:03:23.678 "dma_device_type": 1 00:03:23.678 }, 00:03:23.678 { 00:03:23.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.678 "dma_device_type": 2 00:03:23.678 } 00:03:23.678 ], 00:03:23.678 "driver_specific": {} 00:03:23.678 } 00:03:23.678 ]' 00:03:23.678 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:23.678 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:23.678 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:23.678 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.678 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.678 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.678 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:23.678 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.678 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.678 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.678 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:23.678 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:23.678 12:46:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:23.678 00:03:23.678 real 0m0.145s 00:03:23.678 user 0m0.086s 00:03:23.678 sys 0m0.021s 00:03:23.678 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:23.678 12:46:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.678 ************************************ 
00:03:23.678 END TEST rpc_plugins 00:03:23.678 ************************************ 00:03:23.678 12:46:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:23.678 12:46:21 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:23.678 12:46:21 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:23.678 12:46:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.678 ************************************ 00:03:23.678 START TEST rpc_trace_cmd_test 00:03:23.678 ************************************ 00:03:23.678 12:46:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:23.678 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:23.678 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:23.678 12:46:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.678 12:46:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:23.678 12:46:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.678 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:23.678 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2127050", 00:03:23.678 "tpoint_group_mask": "0x8", 00:03:23.678 "iscsi_conn": { 00:03:23.678 "mask": "0x2", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "scsi": { 00:03:23.678 "mask": "0x4", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "bdev": { 00:03:23.678 "mask": "0x8", 00:03:23.678 "tpoint_mask": "0xffffffffffffffff" 00:03:23.678 }, 00:03:23.678 "nvmf_rdma": { 00:03:23.678 "mask": "0x10", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "nvmf_tcp": { 00:03:23.678 "mask": "0x20", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "ftl": { 00:03:23.678 "mask": "0x40", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "blobfs": { 00:03:23.678 "mask": "0x80", 00:03:23.678 
"tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "dsa": { 00:03:23.678 "mask": "0x200", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "thread": { 00:03:23.678 "mask": "0x400", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "nvme_pcie": { 00:03:23.678 "mask": "0x800", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "iaa": { 00:03:23.678 "mask": "0x1000", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "nvme_tcp": { 00:03:23.678 "mask": "0x2000", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "bdev_nvme": { 00:03:23.678 "mask": "0x4000", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "sock": { 00:03:23.678 "mask": "0x8000", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "blob": { 00:03:23.678 "mask": "0x10000", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "bdev_raid": { 00:03:23.678 "mask": "0x20000", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 }, 00:03:23.678 "scheduler": { 00:03:23.678 "mask": "0x40000", 00:03:23.678 "tpoint_mask": "0x0" 00:03:23.678 } 00:03:23.678 }' 00:03:23.678 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:23.938 00:03:23.938 real 0m0.220s 00:03:23.938 user 0m0.190s 00:03:23.938 sys 0m0.022s 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:23.938 12:46:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:23.938 ************************************ 00:03:23.938 END TEST rpc_trace_cmd_test 00:03:23.938 ************************************ 00:03:23.938 12:46:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:23.938 12:46:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:23.938 12:46:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:23.938 12:46:21 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:23.938 12:46:21 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:23.938 12:46:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.938 ************************************ 00:03:23.938 START TEST rpc_daemon_integrity 00:03:23.938 ************************************ 00:03:23.938 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:23.938 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:23.939 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.939 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.939 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:23.939 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:23.939 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:24.199 { 00:03:24.199 "name": "Malloc2", 00:03:24.199 "aliases": [ 00:03:24.199 "ca2b327d-8c33-40d3-ae2a-69c13352a765" 00:03:24.199 ], 00:03:24.199 "product_name": "Malloc disk", 00:03:24.199 "block_size": 512, 00:03:24.199 "num_blocks": 16384, 00:03:24.199 "uuid": "ca2b327d-8c33-40d3-ae2a-69c13352a765", 00:03:24.199 "assigned_rate_limits": { 00:03:24.199 "rw_ios_per_sec": 0, 00:03:24.199 "rw_mbytes_per_sec": 0, 00:03:24.199 "r_mbytes_per_sec": 0, 00:03:24.199 "w_mbytes_per_sec": 0 00:03:24.199 }, 00:03:24.199 "claimed": false, 00:03:24.199 "zoned": false, 00:03:24.199 "supported_io_types": { 00:03:24.199 "read": true, 00:03:24.199 "write": true, 00:03:24.199 "unmap": true, 00:03:24.199 "flush": true, 00:03:24.199 "reset": true, 00:03:24.199 "nvme_admin": false, 00:03:24.199 "nvme_io": false, 00:03:24.199 "nvme_io_md": false, 00:03:24.199 "write_zeroes": true, 00:03:24.199 "zcopy": true, 00:03:24.199 "get_zone_info": false, 00:03:24.199 "zone_management": false, 00:03:24.199 "zone_append": false, 00:03:24.199 "compare": false, 00:03:24.199 "compare_and_write": false, 00:03:24.199 "abort": true, 00:03:24.199 "seek_hole": false, 00:03:24.199 "seek_data": false, 00:03:24.199 "copy": true, 00:03:24.199 "nvme_iov_md": false 00:03:24.199 }, 00:03:24.199 "memory_domains": [ 00:03:24.199 { 
00:03:24.199 "dma_device_id": "system", 00:03:24.199 "dma_device_type": 1 00:03:24.199 }, 00:03:24.199 { 00:03:24.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.199 "dma_device_type": 2 00:03:24.199 } 00:03:24.199 ], 00:03:24.199 "driver_specific": {} 00:03:24.199 } 00:03:24.199 ]' 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.199 [2024-11-18 12:46:21.739944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:24.199 [2024-11-18 12:46:21.739971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:24.199 [2024-11-18 12:46:21.739983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b56f60 00:03:24.199 [2024-11-18 12:46:21.739990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:24.199 [2024-11-18 12:46:21.741116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:24.199 [2024-11-18 12:46:21.741138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:24.199 Passthru0 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:03:24.199 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:24.199 { 00:03:24.199 "name": "Malloc2", 00:03:24.199 "aliases": [ 00:03:24.199 "ca2b327d-8c33-40d3-ae2a-69c13352a765" 00:03:24.199 ], 00:03:24.199 "product_name": "Malloc disk", 00:03:24.199 "block_size": 512, 00:03:24.200 "num_blocks": 16384, 00:03:24.200 "uuid": "ca2b327d-8c33-40d3-ae2a-69c13352a765", 00:03:24.200 "assigned_rate_limits": { 00:03:24.200 "rw_ios_per_sec": 0, 00:03:24.200 "rw_mbytes_per_sec": 0, 00:03:24.200 "r_mbytes_per_sec": 0, 00:03:24.200 "w_mbytes_per_sec": 0 00:03:24.200 }, 00:03:24.200 "claimed": true, 00:03:24.200 "claim_type": "exclusive_write", 00:03:24.200 "zoned": false, 00:03:24.200 "supported_io_types": { 00:03:24.200 "read": true, 00:03:24.200 "write": true, 00:03:24.200 "unmap": true, 00:03:24.200 "flush": true, 00:03:24.200 "reset": true, 00:03:24.200 "nvme_admin": false, 00:03:24.200 "nvme_io": false, 00:03:24.200 "nvme_io_md": false, 00:03:24.200 "write_zeroes": true, 00:03:24.200 "zcopy": true, 00:03:24.200 "get_zone_info": false, 00:03:24.200 "zone_management": false, 00:03:24.200 "zone_append": false, 00:03:24.200 "compare": false, 00:03:24.200 "compare_and_write": false, 00:03:24.200 "abort": true, 00:03:24.200 "seek_hole": false, 00:03:24.200 "seek_data": false, 00:03:24.200 "copy": true, 00:03:24.200 "nvme_iov_md": false 00:03:24.200 }, 00:03:24.200 "memory_domains": [ 00:03:24.200 { 00:03:24.200 "dma_device_id": "system", 00:03:24.200 "dma_device_type": 1 00:03:24.200 }, 00:03:24.200 { 00:03:24.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.200 "dma_device_type": 2 00:03:24.200 } 00:03:24.200 ], 00:03:24.200 "driver_specific": {} 00:03:24.200 }, 00:03:24.200 { 00:03:24.200 "name": "Passthru0", 00:03:24.200 "aliases": [ 00:03:24.200 "07514cc5-39b3-5dbc-8785-d56accddef35" 00:03:24.200 ], 00:03:24.200 "product_name": "passthru", 00:03:24.200 "block_size": 512, 00:03:24.200 "num_blocks": 16384, 00:03:24.200 "uuid": 
"07514cc5-39b3-5dbc-8785-d56accddef35", 00:03:24.200 "assigned_rate_limits": { 00:03:24.200 "rw_ios_per_sec": 0, 00:03:24.200 "rw_mbytes_per_sec": 0, 00:03:24.200 "r_mbytes_per_sec": 0, 00:03:24.200 "w_mbytes_per_sec": 0 00:03:24.200 }, 00:03:24.200 "claimed": false, 00:03:24.200 "zoned": false, 00:03:24.200 "supported_io_types": { 00:03:24.200 "read": true, 00:03:24.200 "write": true, 00:03:24.200 "unmap": true, 00:03:24.200 "flush": true, 00:03:24.200 "reset": true, 00:03:24.200 "nvme_admin": false, 00:03:24.200 "nvme_io": false, 00:03:24.200 "nvme_io_md": false, 00:03:24.200 "write_zeroes": true, 00:03:24.200 "zcopy": true, 00:03:24.200 "get_zone_info": false, 00:03:24.200 "zone_management": false, 00:03:24.200 "zone_append": false, 00:03:24.200 "compare": false, 00:03:24.200 "compare_and_write": false, 00:03:24.200 "abort": true, 00:03:24.200 "seek_hole": false, 00:03:24.200 "seek_data": false, 00:03:24.200 "copy": true, 00:03:24.200 "nvme_iov_md": false 00:03:24.200 }, 00:03:24.200 "memory_domains": [ 00:03:24.200 { 00:03:24.200 "dma_device_id": "system", 00:03:24.200 "dma_device_type": 1 00:03:24.200 }, 00:03:24.200 { 00:03:24.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.200 "dma_device_type": 2 00:03:24.200 } 00:03:24.200 ], 00:03:24.200 "driver_specific": { 00:03:24.200 "passthru": { 00:03:24.200 "name": "Passthru0", 00:03:24.200 "base_bdev_name": "Malloc2" 00:03:24.200 } 00:03:24.200 } 00:03:24.200 } 00:03:24.200 ]' 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:24.200 00:03:24.200 real 0m0.278s 00:03:24.200 user 0m0.173s 00:03:24.200 sys 0m0.040s 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:24.200 12:46:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.200 ************************************ 00:03:24.200 END TEST rpc_daemon_integrity 00:03:24.200 ************************************ 00:03:24.460 12:46:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:24.460 12:46:21 rpc -- rpc/rpc.sh@84 -- # killprocess 2127050 00:03:24.460 12:46:21 rpc -- common/autotest_common.sh@952 -- # '[' -z 2127050 ']' 00:03:24.460 12:46:21 rpc -- common/autotest_common.sh@956 -- # kill -0 2127050 00:03:24.460 12:46:21 rpc -- common/autotest_common.sh@957 -- # uname 00:03:24.460 12:46:21 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:24.460 12:46:21 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2127050 00:03:24.460 12:46:21 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:24.460 12:46:21 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:24.460 12:46:21 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2127050' 00:03:24.460 killing process with pid 2127050 00:03:24.460 12:46:21 rpc -- common/autotest_common.sh@971 -- # kill 2127050 00:03:24.460 12:46:21 rpc -- common/autotest_common.sh@976 -- # wait 2127050 00:03:24.721 00:03:24.721 real 0m2.096s 00:03:24.721 user 0m2.685s 00:03:24.721 sys 0m0.676s 00:03:24.721 12:46:22 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:24.721 12:46:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.721 ************************************ 00:03:24.721 END TEST rpc 00:03:24.721 ************************************ 00:03:24.721 12:46:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:24.721 12:46:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:24.721 12:46:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:24.721 12:46:22 -- common/autotest_common.sh@10 -- # set +x 00:03:24.721 ************************************ 00:03:24.721 START TEST skip_rpc 00:03:24.721 ************************************ 00:03:24.721 12:46:22 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:24.982 * Looking for test storage... 
00:03:24.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:24.982 12:46:22 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:24.982 12:46:22 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:24.982 12:46:22 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:24.982 12:46:22 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:24.982 12:46:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:24.982 12:46:22 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.982 12:46:22 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:24.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.982 --rc genhtml_branch_coverage=1 00:03:24.982 --rc genhtml_function_coverage=1 00:03:24.982 --rc genhtml_legend=1 00:03:24.982 --rc geninfo_all_blocks=1 00:03:24.982 --rc geninfo_unexecuted_blocks=1 00:03:24.982 00:03:24.982 ' 00:03:24.982 12:46:22 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:24.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.982 --rc genhtml_branch_coverage=1 00:03:24.982 --rc genhtml_function_coverage=1 00:03:24.982 --rc genhtml_legend=1 00:03:24.982 --rc geninfo_all_blocks=1 00:03:24.982 --rc geninfo_unexecuted_blocks=1 00:03:24.982 00:03:24.982 ' 00:03:24.982 12:46:22 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:03:24.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.982 --rc genhtml_branch_coverage=1 00:03:24.982 --rc genhtml_function_coverage=1 00:03:24.982 --rc genhtml_legend=1 00:03:24.982 --rc geninfo_all_blocks=1 00:03:24.982 --rc geninfo_unexecuted_blocks=1 00:03:24.982 00:03:24.982 ' 00:03:24.982 12:46:22 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:24.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.982 --rc genhtml_branch_coverage=1 00:03:24.983 --rc genhtml_function_coverage=1 00:03:24.983 --rc genhtml_legend=1 00:03:24.983 --rc geninfo_all_blocks=1 00:03:24.983 --rc geninfo_unexecuted_blocks=1 00:03:24.983 00:03:24.983 ' 00:03:24.983 12:46:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:24.983 12:46:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:24.983 12:46:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:24.983 12:46:22 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:24.983 12:46:22 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:24.983 12:46:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.983 ************************************ 00:03:24.983 START TEST skip_rpc 00:03:24.983 ************************************ 00:03:24.983 12:46:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:24.983 12:46:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2127685 00:03:24.983 12:46:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:24.983 12:46:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:24.983 12:46:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:24.983 [2024-11-18 12:46:22.599240] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:03:24.983 [2024-11-18 12:46:22.599275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127685 ] 00:03:24.983 [2024-11-18 12:46:22.672748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.243 [2024-11-18 12:46:22.713515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:30.529 12:46:27 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2127685 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2127685 ']' 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2127685 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2127685 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:30.529 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2127685' 00:03:30.530 killing process with pid 2127685 00:03:30.530 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2127685 00:03:30.530 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2127685 00:03:30.530 00:03:30.530 real 0m5.364s 00:03:30.530 user 0m5.124s 00:03:30.530 sys 0m0.277s 00:03:30.530 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:30.530 12:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.530 ************************************ 00:03:30.530 END TEST skip_rpc 00:03:30.530 ************************************ 00:03:30.530 12:46:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:30.530 12:46:27 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:30.530 12:46:27 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:30.530 12:46:27 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.530 ************************************ 00:03:30.530 START TEST skip_rpc_with_json 00:03:30.530 ************************************ 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2128631 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2128631 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2128631 ']' 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:30.530 12:46:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:30.530 [2024-11-18 12:46:28.034911] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:03:30.530 [2024-11-18 12:46:28.034952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128631 ] 00:03:30.530 [2024-11-18 12:46:28.109488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.530 [2024-11-18 12:46:28.146348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.790 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:30.791 [2024-11-18 12:46:28.370988] nvmf_rpc.c:2868:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:30.791 request: 00:03:30.791 { 00:03:30.791 "trtype": "tcp", 00:03:30.791 "method": "nvmf_get_transports", 00:03:30.791 "req_id": 1 00:03:30.791 } 00:03:30.791 Got JSON-RPC error response 00:03:30.791 response: 00:03:30.791 { 00:03:30.791 "code": -19, 00:03:30.791 "message": "No such device" 00:03:30.791 } 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:30.791 [2024-11-18 12:46:28.383094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:30.791 12:46:28 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.791 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.052 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:31.052 12:46:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:31.052 { 00:03:31.052 "subsystems": [ 00:03:31.052 { 00:03:31.052 "subsystem": "fsdev", 00:03:31.052 "config": [ 00:03:31.052 { 00:03:31.052 "method": "fsdev_set_opts", 00:03:31.052 "params": { 00:03:31.052 "fsdev_io_pool_size": 65535, 00:03:31.052 "fsdev_io_cache_size": 256 00:03:31.052 } 00:03:31.052 } 00:03:31.052 ] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "vfio_user_target", 00:03:31.052 "config": null 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "keyring", 00:03:31.052 "config": [] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "iobuf", 00:03:31.052 "config": [ 00:03:31.052 { 00:03:31.052 "method": "iobuf_set_options", 00:03:31.052 "params": { 00:03:31.052 "small_pool_count": 8192, 00:03:31.052 "large_pool_count": 1024, 00:03:31.052 "small_bufsize": 8192, 00:03:31.052 "large_bufsize": 135168, 00:03:31.052 "enable_numa": false 00:03:31.052 } 00:03:31.052 } 00:03:31.052 ] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "sock", 00:03:31.052 "config": [ 00:03:31.052 { 00:03:31.052 "method": "sock_set_default_impl", 00:03:31.052 "params": { 00:03:31.052 "impl_name": "posix" 00:03:31.052 } 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "method": "sock_impl_set_options", 00:03:31.052 "params": { 00:03:31.052 "impl_name": "ssl", 00:03:31.052 "recv_buf_size": 4096, 00:03:31.052 "send_buf_size": 4096, 
00:03:31.052 "enable_recv_pipe": true, 00:03:31.052 "enable_quickack": false, 00:03:31.052 "enable_placement_id": 0, 00:03:31.052 "enable_zerocopy_send_server": true, 00:03:31.052 "enable_zerocopy_send_client": false, 00:03:31.052 "zerocopy_threshold": 0, 00:03:31.052 "tls_version": 0, 00:03:31.052 "enable_ktls": false 00:03:31.052 } 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "method": "sock_impl_set_options", 00:03:31.052 "params": { 00:03:31.052 "impl_name": "posix", 00:03:31.052 "recv_buf_size": 2097152, 00:03:31.052 "send_buf_size": 2097152, 00:03:31.052 "enable_recv_pipe": true, 00:03:31.052 "enable_quickack": false, 00:03:31.052 "enable_placement_id": 0, 00:03:31.052 "enable_zerocopy_send_server": true, 00:03:31.052 "enable_zerocopy_send_client": false, 00:03:31.052 "zerocopy_threshold": 0, 00:03:31.052 "tls_version": 0, 00:03:31.052 "enable_ktls": false 00:03:31.052 } 00:03:31.052 } 00:03:31.052 ] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "vmd", 00:03:31.052 "config": [] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "accel", 00:03:31.052 "config": [ 00:03:31.052 { 00:03:31.052 "method": "accel_set_options", 00:03:31.052 "params": { 00:03:31.052 "small_cache_size": 128, 00:03:31.052 "large_cache_size": 16, 00:03:31.052 "task_count": 2048, 00:03:31.052 "sequence_count": 2048, 00:03:31.052 "buf_count": 2048 00:03:31.052 } 00:03:31.052 } 00:03:31.052 ] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "bdev", 00:03:31.052 "config": [ 00:03:31.052 { 00:03:31.052 "method": "bdev_set_options", 00:03:31.052 "params": { 00:03:31.052 "bdev_io_pool_size": 65535, 00:03:31.052 "bdev_io_cache_size": 256, 00:03:31.052 "bdev_auto_examine": true, 00:03:31.052 "iobuf_small_cache_size": 128, 00:03:31.052 "iobuf_large_cache_size": 16 00:03:31.052 } 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "method": "bdev_raid_set_options", 00:03:31.052 "params": { 00:03:31.052 "process_window_size_kb": 1024, 00:03:31.052 "process_max_bandwidth_mb_sec": 0 
00:03:31.052 } 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "method": "bdev_iscsi_set_options", 00:03:31.052 "params": { 00:03:31.052 "timeout_sec": 30 00:03:31.052 } 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "method": "bdev_nvme_set_options", 00:03:31.052 "params": { 00:03:31.052 "action_on_timeout": "none", 00:03:31.052 "timeout_us": 0, 00:03:31.052 "timeout_admin_us": 0, 00:03:31.052 "keep_alive_timeout_ms": 10000, 00:03:31.052 "arbitration_burst": 0, 00:03:31.052 "low_priority_weight": 0, 00:03:31.052 "medium_priority_weight": 0, 00:03:31.052 "high_priority_weight": 0, 00:03:31.052 "nvme_adminq_poll_period_us": 10000, 00:03:31.052 "nvme_ioq_poll_period_us": 0, 00:03:31.052 "io_queue_requests": 0, 00:03:31.052 "delay_cmd_submit": true, 00:03:31.052 "transport_retry_count": 4, 00:03:31.052 "bdev_retry_count": 3, 00:03:31.052 "transport_ack_timeout": 0, 00:03:31.052 "ctrlr_loss_timeout_sec": 0, 00:03:31.052 "reconnect_delay_sec": 0, 00:03:31.052 "fast_io_fail_timeout_sec": 0, 00:03:31.052 "disable_auto_failback": false, 00:03:31.052 "generate_uuids": false, 00:03:31.052 "transport_tos": 0, 00:03:31.052 "nvme_error_stat": false, 00:03:31.052 "rdma_srq_size": 0, 00:03:31.052 "io_path_stat": false, 00:03:31.052 "allow_accel_sequence": false, 00:03:31.052 "rdma_max_cq_size": 0, 00:03:31.052 "rdma_cm_event_timeout_ms": 0, 00:03:31.052 "dhchap_digests": [ 00:03:31.052 "sha256", 00:03:31.052 "sha384", 00:03:31.052 "sha512" 00:03:31.052 ], 00:03:31.052 "dhchap_dhgroups": [ 00:03:31.052 "null", 00:03:31.052 "ffdhe2048", 00:03:31.052 "ffdhe3072", 00:03:31.052 "ffdhe4096", 00:03:31.052 "ffdhe6144", 00:03:31.052 "ffdhe8192" 00:03:31.052 ] 00:03:31.052 } 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "method": "bdev_nvme_set_hotplug", 00:03:31.052 "params": { 00:03:31.052 "period_us": 100000, 00:03:31.052 "enable": false 00:03:31.052 } 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "method": "bdev_wait_for_examine" 00:03:31.052 } 00:03:31.052 ] 00:03:31.052 }, 00:03:31.052 { 
00:03:31.052 "subsystem": "scsi", 00:03:31.052 "config": null 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "scheduler", 00:03:31.052 "config": [ 00:03:31.052 { 00:03:31.052 "method": "framework_set_scheduler", 00:03:31.052 "params": { 00:03:31.052 "name": "static" 00:03:31.052 } 00:03:31.052 } 00:03:31.052 ] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "vhost_scsi", 00:03:31.052 "config": [] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "vhost_blk", 00:03:31.052 "config": [] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "ublk", 00:03:31.052 "config": [] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "nbd", 00:03:31.052 "config": [] 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "subsystem": "nvmf", 00:03:31.052 "config": [ 00:03:31.052 { 00:03:31.052 "method": "nvmf_set_config", 00:03:31.052 "params": { 00:03:31.052 "discovery_filter": "match_any", 00:03:31.052 "admin_cmd_passthru": { 00:03:31.052 "identify_ctrlr": false 00:03:31.052 }, 00:03:31.052 "dhchap_digests": [ 00:03:31.052 "sha256", 00:03:31.052 "sha384", 00:03:31.052 "sha512" 00:03:31.052 ], 00:03:31.052 "dhchap_dhgroups": [ 00:03:31.052 "null", 00:03:31.052 "ffdhe2048", 00:03:31.052 "ffdhe3072", 00:03:31.052 "ffdhe4096", 00:03:31.052 "ffdhe6144", 00:03:31.052 "ffdhe8192" 00:03:31.052 ] 00:03:31.052 } 00:03:31.052 }, 00:03:31.052 { 00:03:31.052 "method": "nvmf_set_max_subsystems", 00:03:31.052 "params": { 00:03:31.052 "max_subsystems": 1024 00:03:31.052 } 00:03:31.052 }, 00:03:31.053 { 00:03:31.053 "method": "nvmf_set_crdt", 00:03:31.053 "params": { 00:03:31.053 "crdt1": 0, 00:03:31.053 "crdt2": 0, 00:03:31.053 "crdt3": 0 00:03:31.053 } 00:03:31.053 }, 00:03:31.053 { 00:03:31.053 "method": "nvmf_create_transport", 00:03:31.053 "params": { 00:03:31.053 "trtype": "TCP", 00:03:31.053 "max_queue_depth": 128, 00:03:31.053 "max_io_qpairs_per_ctrlr": 127, 00:03:31.053 "in_capsule_data_size": 4096, 00:03:31.053 "max_io_size": 131072, 00:03:31.053 
"io_unit_size": 131072, 00:03:31.053 "max_aq_depth": 128, 00:03:31.053 "num_shared_buffers": 511, 00:03:31.053 "buf_cache_size": 4294967295, 00:03:31.053 "dif_insert_or_strip": false, 00:03:31.053 "zcopy": false, 00:03:31.053 "c2h_success": true, 00:03:31.053 "sock_priority": 0, 00:03:31.053 "abort_timeout_sec": 1, 00:03:31.053 "ack_timeout": 0, 00:03:31.053 "data_wr_pool_size": 0 00:03:31.053 } 00:03:31.053 } 00:03:31.053 ] 00:03:31.053 }, 00:03:31.053 { 00:03:31.053 "subsystem": "iscsi", 00:03:31.053 "config": [ 00:03:31.053 { 00:03:31.053 "method": "iscsi_set_options", 00:03:31.053 "params": { 00:03:31.053 "node_base": "iqn.2016-06.io.spdk", 00:03:31.053 "max_sessions": 128, 00:03:31.053 "max_connections_per_session": 2, 00:03:31.053 "max_queue_depth": 64, 00:03:31.053 "default_time2wait": 2, 00:03:31.053 "default_time2retain": 20, 00:03:31.053 "first_burst_length": 8192, 00:03:31.053 "immediate_data": true, 00:03:31.053 "allow_duplicated_isid": false, 00:03:31.053 "error_recovery_level": 0, 00:03:31.053 "nop_timeout": 60, 00:03:31.053 "nop_in_interval": 30, 00:03:31.053 "disable_chap": false, 00:03:31.053 "require_chap": false, 00:03:31.053 "mutual_chap": false, 00:03:31.053 "chap_group": 0, 00:03:31.053 "max_large_datain_per_connection": 64, 00:03:31.053 "max_r2t_per_connection": 4, 00:03:31.053 "pdu_pool_size": 36864, 00:03:31.053 "immediate_data_pool_size": 16384, 00:03:31.053 "data_out_pool_size": 2048 00:03:31.053 } 00:03:31.053 } 00:03:31.053 ] 00:03:31.053 } 00:03:31.053 ] 00:03:31.053 } 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2128631 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2128631 ']' 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2128631 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # uname 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2128631 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2128631' 00:03:31.053 killing process with pid 2128631 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2128631 00:03:31.053 12:46:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2128631 00:03:31.313 12:46:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2128665 00:03:31.313 12:46:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:31.313 12:46:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2128665 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2128665 ']' 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2128665 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2128665 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2128665' 00:03:36.596 killing process with pid 2128665 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2128665 00:03:36.596 12:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2128665 00:03:36.596 12:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:36.596 12:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:36.596 00:03:36.596 real 0m6.290s 00:03:36.596 user 0m6.009s 00:03:36.596 sys 0m0.586s 00:03:36.596 12:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:36.596 12:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.596 ************************************ 00:03:36.596 END TEST skip_rpc_with_json 00:03:36.596 ************************************ 00:03:36.857 12:46:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:36.857 12:46:34 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:36.857 12:46:34 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:36.857 12:46:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.857 ************************************ 00:03:36.857 START TEST skip_rpc_with_delay 00:03:36.857 ************************************ 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:36.857 [2024-11-18 12:46:34.396521] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:36.857 00:03:36.857 real 0m0.069s 00:03:36.857 user 0m0.043s 00:03:36.857 sys 0m0.025s 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:36.857 12:46:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:36.857 ************************************ 00:03:36.857 END TEST skip_rpc_with_delay 00:03:36.857 ************************************ 00:03:36.857 12:46:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:36.857 12:46:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:36.857 12:46:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:36.857 12:46:34 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:36.857 12:46:34 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:36.857 12:46:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.857 ************************************ 00:03:36.857 START TEST exit_on_failed_rpc_init 00:03:36.857 ************************************ 00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2129677 00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2129677 00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2129677 ']' 00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:36.857 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:36.857 [2024-11-18 12:46:34.538764] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:03:36.857 [2024-11-18 12:46:34.538815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2129677 ] 00:03:37.117 [2024-11-18 12:46:34.617530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.117 [2024-11-18 12:46:34.660392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:37.378 
12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:37.378 12:46:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:37.378 [2024-11-18 12:46:34.943635] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:03:37.378 [2024-11-18 12:46:34.943681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2129849 ] 00:03:37.378 [2024-11-18 12:46:35.018725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.378 [2024-11-18 12:46:35.059685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:37.378 [2024-11-18 12:46:35.059741] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:37.378 [2024-11-18 12:46:35.059750] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:37.378 [2024-11-18 12:46:35.059759] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2129677 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2129677 ']' 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2129677 00:03:37.639 12:46:35 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2129677 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2129677' 00:03:37.639 killing process with pid 2129677 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2129677 00:03:37.639 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2129677 00:03:37.900 00:03:37.900 real 0m0.974s 00:03:37.900 user 0m1.056s 00:03:37.900 sys 0m0.382s 00:03:37.900 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:37.900 12:46:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:37.900 ************************************ 00:03:37.900 END TEST exit_on_failed_rpc_init 00:03:37.900 ************************************ 00:03:37.900 12:46:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:37.900 00:03:37.900 real 0m13.152s 00:03:37.900 user 0m12.447s 00:03:37.900 sys 0m1.544s 00:03:37.900 12:46:35 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:37.900 12:46:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.900 ************************************ 00:03:37.900 END TEST skip_rpc 00:03:37.900 ************************************ 00:03:37.900 12:46:35 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:37.900 12:46:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:37.900 12:46:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.900 12:46:35 -- common/autotest_common.sh@10 -- # set +x 00:03:37.900 ************************************ 00:03:37.900 START TEST rpc_client 00:03:37.900 ************************************ 00:03:37.900 12:46:35 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:38.161 * Looking for test storage... 00:03:38.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:38.161 12:46:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:38.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.161 --rc genhtml_branch_coverage=1 00:03:38.161 --rc genhtml_function_coverage=1 00:03:38.161 --rc genhtml_legend=1 00:03:38.161 --rc geninfo_all_blocks=1 00:03:38.161 --rc geninfo_unexecuted_blocks=1 00:03:38.161 00:03:38.161 ' 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:38.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.161 --rc genhtml_branch_coverage=1 
00:03:38.161 --rc genhtml_function_coverage=1 00:03:38.161 --rc genhtml_legend=1 00:03:38.161 --rc geninfo_all_blocks=1 00:03:38.161 --rc geninfo_unexecuted_blocks=1 00:03:38.161 00:03:38.161 ' 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:38.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.161 --rc genhtml_branch_coverage=1 00:03:38.161 --rc genhtml_function_coverage=1 00:03:38.161 --rc genhtml_legend=1 00:03:38.161 --rc geninfo_all_blocks=1 00:03:38.161 --rc geninfo_unexecuted_blocks=1 00:03:38.161 00:03:38.161 ' 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:38.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.161 --rc genhtml_branch_coverage=1 00:03:38.161 --rc genhtml_function_coverage=1 00:03:38.161 --rc genhtml_legend=1 00:03:38.161 --rc geninfo_all_blocks=1 00:03:38.161 --rc geninfo_unexecuted_blocks=1 00:03:38.161 00:03:38.161 ' 00:03:38.161 12:46:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:38.161 OK 00:03:38.161 12:46:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:38.161 00:03:38.161 real 0m0.198s 00:03:38.161 user 0m0.121s 00:03:38.161 sys 0m0.089s 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:38.161 12:46:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:38.161 ************************************ 00:03:38.161 END TEST rpc_client 00:03:38.161 ************************************ 00:03:38.161 12:46:35 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:38.161 12:46:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:38.161 12:46:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:38.161 12:46:35 -- common/autotest_common.sh@10 
-- # set +x 00:03:38.161 ************************************ 00:03:38.161 START TEST json_config 00:03:38.161 ************************************ 00:03:38.161 12:46:35 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:38.423 12:46:35 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:38.423 12:46:35 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:03:38.423 12:46:35 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:38.423 12:46:35 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:38.423 12:46:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:38.423 12:46:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:38.423 12:46:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:38.423 12:46:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:38.423 12:46:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:38.423 12:46:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:38.423 12:46:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:38.423 12:46:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:38.423 12:46:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:38.423 12:46:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:38.423 12:46:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:38.423 12:46:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:38.423 12:46:35 json_config -- scripts/common.sh@345 -- # : 1 00:03:38.423 12:46:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:38.423 12:46:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:38.423 12:46:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:38.423 12:46:35 json_config -- scripts/common.sh@353 -- # local d=1 00:03:38.423 12:46:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:38.423 12:46:35 json_config -- scripts/common.sh@355 -- # echo 1 00:03:38.423 12:46:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:38.423 12:46:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:38.423 12:46:35 json_config -- scripts/common.sh@353 -- # local d=2 00:03:38.423 12:46:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:38.423 12:46:35 json_config -- scripts/common.sh@355 -- # echo 2 00:03:38.423 12:46:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:38.423 12:46:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:38.423 12:46:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:38.423 12:46:35 json_config -- scripts/common.sh@368 -- # return 0 00:03:38.423 12:46:35 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:38.423 12:46:35 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:38.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.423 --rc genhtml_branch_coverage=1 00:03:38.423 --rc genhtml_function_coverage=1 00:03:38.423 --rc genhtml_legend=1 00:03:38.423 --rc geninfo_all_blocks=1 00:03:38.423 --rc geninfo_unexecuted_blocks=1 00:03:38.423 00:03:38.423 ' 00:03:38.423 12:46:35 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:38.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.423 --rc genhtml_branch_coverage=1 00:03:38.423 --rc genhtml_function_coverage=1 00:03:38.423 --rc genhtml_legend=1 00:03:38.423 --rc geninfo_all_blocks=1 00:03:38.423 --rc geninfo_unexecuted_blocks=1 00:03:38.423 00:03:38.423 ' 00:03:38.423 12:46:35 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:38.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.423 --rc genhtml_branch_coverage=1 00:03:38.423 --rc genhtml_function_coverage=1 00:03:38.423 --rc genhtml_legend=1 00:03:38.423 --rc geninfo_all_blocks=1 00:03:38.423 --rc geninfo_unexecuted_blocks=1 00:03:38.423 00:03:38.423 ' 00:03:38.423 12:46:35 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:38.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.423 --rc genhtml_branch_coverage=1 00:03:38.423 --rc genhtml_function_coverage=1 00:03:38.423 --rc genhtml_legend=1 00:03:38.423 --rc geninfo_all_blocks=1 00:03:38.423 --rc geninfo_unexecuted_blocks=1 00:03:38.423 00:03:38.423 ' 00:03:38.423 12:46:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:38.423 12:46:35 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:38.423 12:46:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:38.423 12:46:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:38.423 12:46:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:38.423 12:46:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:38.423 12:46:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.423 12:46:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.423 12:46:35 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.423 12:46:35 json_config -- paths/export.sh@5 -- # export PATH 00:03:38.424 12:46:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@51 -- # : 0 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:38.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:38.424 12:46:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:38.424 12:46:35 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:38.424 INFO: JSON configuration test init 00:03:38.424 12:46:36 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.424 12:46:36 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:38.424 12:46:36 json_config -- json_config/common.sh@9 -- # local app=target 00:03:38.424 12:46:36 json_config -- json_config/common.sh@10 -- # shift 00:03:38.424 12:46:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:38.424 12:46:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:38.424 12:46:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:38.424 12:46:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:38.424 12:46:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:38.424 12:46:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2130131 00:03:38.424 12:46:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:38.424 Waiting for target to run... 
00:03:38.424 12:46:36 json_config -- json_config/common.sh@25 -- # waitforlisten 2130131 /var/tmp/spdk_tgt.sock 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@833 -- # '[' -z 2130131 ']' 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:38.424 12:46:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:38.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:38.424 12:46:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.424 [2024-11-18 12:46:36.068808] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:03:38.424 [2024-11-18 12:46:36.068860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130131 ] 00:03:38.685 [2024-11-18 12:46:36.351563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.945 [2024-11-18 12:46:36.385985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.515 12:46:36 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:39.515 12:46:36 json_config -- common/autotest_common.sh@866 -- # return 0 00:03:39.515 12:46:36 json_config -- json_config/common.sh@26 -- # echo '' 00:03:39.515 00:03:39.515 12:46:36 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:39.515 12:46:36 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:39.515 12:46:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:39.515 12:46:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.515 12:46:36 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:39.515 12:46:36 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:39.515 12:46:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:39.515 12:46:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.515 12:46:36 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:39.515 12:46:36 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:39.515 12:46:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:42.813 12:46:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:42.813 12:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:42.813 12:46:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@54 -- # sort 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:42.813 12:46:40 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:42.813 12:46:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:42.813 12:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:42.813 12:46:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:42.813 12:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:42.813 12:46:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:42.813 12:46:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:42.813 MallocForNvmf0 00:03:43.073 12:46:40 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:43.073 12:46:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:43.073 MallocForNvmf1 00:03:43.073 12:46:40 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:43.073 12:46:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:43.332 [2024-11-18 12:46:40.883536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:43.332 12:46:40 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:43.332 12:46:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:43.592 12:46:41 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:43.592 12:46:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:43.851 12:46:41 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:43.851 12:46:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:43.851 12:46:41 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:43.851 12:46:41 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:44.111 [2024-11-18 12:46:41.665971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:44.111 12:46:41 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:44.111 12:46:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:44.111 12:46:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.111 12:46:41 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:44.111 12:46:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:44.111 12:46:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.111 12:46:41 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:44.111 12:46:41 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:44.111 12:46:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:44.371 MallocBdevForConfigChangeCheck 00:03:44.371 12:46:41 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:44.371 12:46:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:44.371 12:46:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.371 12:46:41 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:44.371 12:46:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:44.631 12:46:42 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:44.631 INFO: shutting down applications... 00:03:44.631 12:46:42 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:44.631 12:46:42 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:44.631 12:46:42 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:44.631 12:46:42 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:46.540 Calling clear_iscsi_subsystem 00:03:46.540 Calling clear_nvmf_subsystem 00:03:46.540 Calling clear_nbd_subsystem 00:03:46.540 Calling clear_ublk_subsystem 00:03:46.540 Calling clear_vhost_blk_subsystem 00:03:46.540 Calling clear_vhost_scsi_subsystem 00:03:46.540 Calling clear_bdev_subsystem 00:03:46.540 12:46:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:46.540 12:46:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:46.540 12:46:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:46.540 12:46:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:46.540 12:46:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:46.540 12:46:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:46.800 12:46:44 json_config -- json_config/json_config.sh@352 -- # break 00:03:46.800 12:46:44 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:46.800 12:46:44 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:46.800 12:46:44 json_config -- json_config/common.sh@31 -- # local app=target 00:03:46.800 12:46:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:46.800 12:46:44 json_config -- json_config/common.sh@35 -- # [[ -n 2130131 ]] 00:03:46.800 12:46:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2130131 00:03:46.800 12:46:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:46.800 12:46:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:46.800 12:46:44 json_config -- json_config/common.sh@41 -- # kill -0 2130131 00:03:46.800 12:46:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:47.371 12:46:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:47.371 12:46:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:47.371 12:46:44 json_config -- json_config/common.sh@41 -- # kill -0 2130131 00:03:47.371 12:46:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:47.371 12:46:44 json_config -- json_config/common.sh@43 -- # break 00:03:47.371 12:46:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:47.371 12:46:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:47.371 SPDK target shutdown done 00:03:47.371 12:46:44 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:47.371 INFO: relaunching applications... 
00:03:47.372 12:46:44 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.372 12:46:44 json_config -- json_config/common.sh@9 -- # local app=target 00:03:47.372 12:46:44 json_config -- json_config/common.sh@10 -- # shift 00:03:47.372 12:46:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:47.372 12:46:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:47.372 12:46:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:47.372 12:46:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.372 12:46:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.372 12:46:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2131725 00:03:47.372 12:46:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:47.372 Waiting for target to run... 00:03:47.372 12:46:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.372 12:46:44 json_config -- json_config/common.sh@25 -- # waitforlisten 2131725 /var/tmp/spdk_tgt.sock 00:03:47.372 12:46:44 json_config -- common/autotest_common.sh@833 -- # '[' -z 2131725 ']' 00:03:47.372 12:46:44 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:47.372 12:46:44 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:47.372 12:46:44 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:47.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:47.372 12:46:44 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:47.372 12:46:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.372 [2024-11-18 12:46:44.898796] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:03:47.372 [2024-11-18 12:46:44.898850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131725 ] 00:03:47.943 [2024-11-18 12:46:45.356838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.943 [2024-11-18 12:46:45.415992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.240 [2024-11-18 12:46:48.453623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:51.240 [2024-11-18 12:46:48.485971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:51.500 12:46:49 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:51.501 12:46:49 json_config -- common/autotest_common.sh@866 -- # return 0 00:03:51.501 12:46:49 json_config -- json_config/common.sh@26 -- # echo '' 00:03:51.501 00:03:51.501 12:46:49 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:51.501 12:46:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:51.501 INFO: Checking if target configuration is the same... 
00:03:51.501 12:46:49 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:51.501 12:46:49 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:51.501 12:46:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:51.501 + '[' 2 -ne 2 ']' 00:03:51.501 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:51.501 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:51.501 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:51.501 +++ basename /dev/fd/62 00:03:51.501 ++ mktemp /tmp/62.XXX 00:03:51.501 + tmp_file_1=/tmp/62.XcY 00:03:51.501 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:51.501 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:51.501 + tmp_file_2=/tmp/spdk_tgt_config.json.e11 00:03:51.501 + ret=0 00:03:51.501 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:52.071 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:52.071 + diff -u /tmp/62.XcY /tmp/spdk_tgt_config.json.e11 00:03:52.071 + echo 'INFO: JSON config files are the same' 00:03:52.071 INFO: JSON config files are the same 00:03:52.071 + rm /tmp/62.XcY /tmp/spdk_tgt_config.json.e11 00:03:52.071 + exit 0 00:03:52.071 12:46:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:52.071 12:46:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:52.071 INFO: changing configuration and checking if this can be detected... 
00:03:52.071 12:46:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:52.071 12:46:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:52.071 12:46:49 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.071 12:46:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:52.071 12:46:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:52.071 + '[' 2 -ne 2 ']' 00:03:52.071 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:52.071 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:52.071 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.071 +++ basename /dev/fd/62 00:03:52.071 ++ mktemp /tmp/62.XXX 00:03:52.071 + tmp_file_1=/tmp/62.GY7 00:03:52.071 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.071 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:52.071 + tmp_file_2=/tmp/spdk_tgt_config.json.XOE 00:03:52.071 + ret=0 00:03:52.071 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:52.642 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:52.642 + diff -u /tmp/62.GY7 /tmp/spdk_tgt_config.json.XOE 00:03:52.642 + ret=1 00:03:52.642 + echo '=== Start of file: /tmp/62.GY7 ===' 00:03:52.642 + cat /tmp/62.GY7 00:03:52.642 + echo '=== End of file: /tmp/62.GY7 ===' 00:03:52.642 + echo '' 00:03:52.642 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XOE ===' 00:03:52.642 + cat /tmp/spdk_tgt_config.json.XOE 00:03:52.642 + echo '=== End of file: /tmp/spdk_tgt_config.json.XOE ===' 00:03:52.642 + echo '' 00:03:52.642 + rm /tmp/62.GY7 /tmp/spdk_tgt_config.json.XOE 00:03:52.642 + exit 1 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:52.642 INFO: configuration change detected. 
00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@324 -- # [[ -n 2131725 ]] 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.642 12:46:50 json_config -- json_config/json_config.sh@330 -- # killprocess 2131725 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@952 -- # '[' -z 2131725 ']' 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@956 -- # kill -0 
2131725 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@957 -- # uname 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2131725 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2131725' 00:03:52.642 killing process with pid 2131725 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@971 -- # kill 2131725 00:03:52.642 12:46:50 json_config -- common/autotest_common.sh@976 -- # wait 2131725 00:03:54.554 12:46:51 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.554 12:46:51 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:54.554 12:46:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:54.554 12:46:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.554 12:46:51 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:54.554 12:46:51 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:54.554 INFO: Success 00:03:54.554 00:03:54.554 real 0m15.961s 00:03:54.554 user 0m16.635s 00:03:54.554 sys 0m2.592s 00:03:54.554 12:46:51 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.554 12:46:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.554 ************************************ 00:03:54.554 END TEST json_config 00:03:54.554 ************************************ 00:03:54.554 12:46:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:54.554 12:46:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.554 12:46:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.554 12:46:51 -- common/autotest_common.sh@10 -- # set +x 00:03:54.554 ************************************ 00:03:54.554 START TEST json_config_extra_key 00:03:54.554 ************************************ 00:03:54.554 12:46:51 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:54.554 12:46:51 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:54.554 12:46:51 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:03:54.554 12:46:51 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:54.554 12:46:51 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:54.554 12:46:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.554 12:46:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:54.554 12:46:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.554 12:46:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.554 12:46:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.554 12:46:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:54.554 12:46:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.554 12:46:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:54.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.554 --rc genhtml_branch_coverage=1 00:03:54.554 --rc genhtml_function_coverage=1 00:03:54.554 --rc genhtml_legend=1 00:03:54.554 --rc geninfo_all_blocks=1 
00:03:54.554 --rc geninfo_unexecuted_blocks=1 00:03:54.554 00:03:54.554 ' 00:03:54.554 12:46:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:54.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.554 --rc genhtml_branch_coverage=1 00:03:54.554 --rc genhtml_function_coverage=1 00:03:54.554 --rc genhtml_legend=1 00:03:54.554 --rc geninfo_all_blocks=1 00:03:54.554 --rc geninfo_unexecuted_blocks=1 00:03:54.554 00:03:54.555 ' 00:03:54.555 12:46:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:54.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.555 --rc genhtml_branch_coverage=1 00:03:54.555 --rc genhtml_function_coverage=1 00:03:54.555 --rc genhtml_legend=1 00:03:54.555 --rc geninfo_all_blocks=1 00:03:54.555 --rc geninfo_unexecuted_blocks=1 00:03:54.555 00:03:54.555 ' 00:03:54.555 12:46:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:54.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.555 --rc genhtml_branch_coverage=1 00:03:54.555 --rc genhtml_function_coverage=1 00:03:54.555 --rc genhtml_legend=1 00:03:54.555 --rc geninfo_all_blocks=1 00:03:54.555 --rc geninfo_unexecuted_blocks=1 00:03:54.555 00:03:54.555 ' 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:54.555 12:46:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:54.555 12:46:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.555 12:46:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.555 12:46:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.555 12:46:52 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.555 12:46:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.555 12:46:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.555 12:46:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:54.555 12:46:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:54.555 12:46:52 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:54.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:54.555 12:46:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:54.555 INFO: launching applications... 00:03:54.555 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2133003 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:54.555 Waiting for target to run... 
00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2133003 /var/tmp/spdk_tgt.sock 00:03:54.555 12:46:52 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2133003 ']' 00:03:54.555 12:46:52 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.555 12:46:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:54.555 12:46:52 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:54.555 12:46:52 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:54.555 12:46:52 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:54.555 12:46:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:54.555 [2024-11-18 12:46:52.094628] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:03:54.555 [2024-11-18 12:46:52.094679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133003 ] 00:03:55.125 [2024-11-18 12:46:52.557426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.126 [2024-11-18 12:46:52.609626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.386 12:46:52 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:55.386 12:46:52 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:03:55.386 12:46:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:55.386 00:03:55.386 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:55.386 INFO: shutting down applications... 00:03:55.386 12:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:55.386 12:46:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:55.386 12:46:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:55.386 12:46:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2133003 ]] 00:03:55.386 12:46:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2133003 00:03:55.386 12:46:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:55.386 12:46:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.386 12:46:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2133003 00:03:55.386 12:46:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:55.958 12:46:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:55.958 12:46:53 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.958 12:46:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2133003 00:03:55.958 12:46:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:55.958 12:46:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:55.958 12:46:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:55.958 12:46:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:55.958 SPDK target shutdown done 00:03:55.958 12:46:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:55.958 Success 00:03:55.958 00:03:55.958 real 0m1.574s 00:03:55.958 user 0m1.180s 00:03:55.958 sys 0m0.578s 00:03:55.958 12:46:53 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:55.958 12:46:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:55.958 ************************************ 00:03:55.958 END TEST json_config_extra_key 00:03:55.958 ************************************ 00:03:55.958 12:46:53 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:55.958 12:46:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:55.958 12:46:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:55.958 12:46:53 -- common/autotest_common.sh@10 -- # set +x 00:03:55.958 ************************************ 00:03:55.958 START TEST alias_rpc 00:03:55.958 ************************************ 00:03:55.958 12:46:53 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:55.958 * Looking for test storage... 
00:03:55.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:55.958 12:46:53 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:55.958 12:46:53 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:55.958 12:46:53 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:56.218 12:46:53 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:56.218 12:46:53 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.219 12:46:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:56.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.219 --rc genhtml_branch_coverage=1 00:03:56.219 --rc genhtml_function_coverage=1 00:03:56.219 --rc genhtml_legend=1 00:03:56.219 --rc geninfo_all_blocks=1 00:03:56.219 --rc geninfo_unexecuted_blocks=1 00:03:56.219 00:03:56.219 ' 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:56.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.219 --rc genhtml_branch_coverage=1 00:03:56.219 --rc genhtml_function_coverage=1 00:03:56.219 --rc genhtml_legend=1 00:03:56.219 --rc geninfo_all_blocks=1 00:03:56.219 --rc geninfo_unexecuted_blocks=1 00:03:56.219 00:03:56.219 ' 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:03:56.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.219 --rc genhtml_branch_coverage=1 00:03:56.219 --rc genhtml_function_coverage=1 00:03:56.219 --rc genhtml_legend=1 00:03:56.219 --rc geninfo_all_blocks=1 00:03:56.219 --rc geninfo_unexecuted_blocks=1 00:03:56.219 00:03:56.219 ' 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:56.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.219 --rc genhtml_branch_coverage=1 00:03:56.219 --rc genhtml_function_coverage=1 00:03:56.219 --rc genhtml_legend=1 00:03:56.219 --rc geninfo_all_blocks=1 00:03:56.219 --rc geninfo_unexecuted_blocks=1 00:03:56.219 00:03:56.219 ' 00:03:56.219 12:46:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:56.219 12:46:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2133304 00:03:56.219 12:46:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2133304 00:03:56.219 12:46:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2133304 ']' 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:56.219 12:46:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.219 [2024-11-18 12:46:53.728917] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:03:56.219 [2024-11-18 12:46:53.728966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133304 ] 00:03:56.219 [2024-11-18 12:46:53.803600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.219 [2024-11-18 12:46:53.845892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.479 12:46:54 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:56.479 12:46:54 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:03:56.479 12:46:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:56.739 12:46:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2133304 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2133304 ']' 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2133304 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2133304 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2133304' 00:03:56.739 killing process with pid 2133304 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@971 -- # kill 2133304 00:03:56.739 12:46:54 alias_rpc -- common/autotest_common.sh@976 -- # wait 2133304 00:03:57.000 00:03:57.000 real 0m1.143s 00:03:57.000 user 0m1.159s 00:03:57.000 sys 0m0.418s 00:03:57.000 12:46:54 alias_rpc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:03:57.000 12:46:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.000 ************************************ 00:03:57.000 END TEST alias_rpc 00:03:57.000 ************************************ 00:03:57.000 12:46:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:57.000 12:46:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:57.000 12:46:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:57.000 12:46:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:57.000 12:46:54 -- common/autotest_common.sh@10 -- # set +x 00:03:57.260 ************************************ 00:03:57.260 START TEST spdkcli_tcp 00:03:57.260 ************************************ 00:03:57.260 12:46:54 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:57.260 * Looking for test storage... 
00:03:57.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:57.260 12:46:54 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:57.260 12:46:54 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:03:57.260 12:46:54 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:57.260 12:46:54 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.260 12:46:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:57.260 12:46:54 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.260 12:46:54 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:57.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.260 --rc genhtml_branch_coverage=1 00:03:57.260 --rc genhtml_function_coverage=1 00:03:57.260 --rc genhtml_legend=1 00:03:57.260 --rc geninfo_all_blocks=1 00:03:57.260 --rc geninfo_unexecuted_blocks=1 00:03:57.260 00:03:57.260 ' 00:03:57.260 12:46:54 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:57.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.260 --rc genhtml_branch_coverage=1 00:03:57.260 --rc genhtml_function_coverage=1 00:03:57.260 --rc genhtml_legend=1 00:03:57.260 --rc geninfo_all_blocks=1 00:03:57.260 --rc geninfo_unexecuted_blocks=1 00:03:57.260 00:03:57.260 ' 00:03:57.260 12:46:54 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:57.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.260 --rc genhtml_branch_coverage=1 00:03:57.260 --rc genhtml_function_coverage=1 00:03:57.260 --rc genhtml_legend=1 00:03:57.260 --rc geninfo_all_blocks=1 00:03:57.260 --rc geninfo_unexecuted_blocks=1 00:03:57.260 00:03:57.260 ' 00:03:57.260 12:46:54 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:57.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.260 --rc genhtml_branch_coverage=1 00:03:57.260 --rc genhtml_function_coverage=1 00:03:57.260 --rc genhtml_legend=1 00:03:57.260 --rc geninfo_all_blocks=1 00:03:57.260 --rc geninfo_unexecuted_blocks=1 00:03:57.260 00:03:57.260 ' 00:03:57.260 12:46:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:57.260 12:46:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:57.261 12:46:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:57.261 12:46:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:57.261 12:46:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:57.261 12:46:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:57.261 12:46:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:57.261 12:46:54 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:57.261 12:46:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:57.261 12:46:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2133580 00:03:57.261 12:46:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:57.261 12:46:54 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2133580 00:03:57.261 12:46:54 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2133580 ']' 00:03:57.261 12:46:54 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.261 12:46:54 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:57.261 12:46:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.261 12:46:54 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:57.261 12:46:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:57.261 [2024-11-18 12:46:54.955746] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:03:57.261 [2024-11-18 12:46:54.955796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133580 ] 00:03:57.521 [2024-11-18 12:46:55.031465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:57.521 [2024-11-18 12:46:55.073309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.521 [2024-11-18 12:46:55.073310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.781 12:46:55 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:57.781 12:46:55 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:03:57.781 12:46:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2133760 00:03:57.781 12:46:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:57.781 12:46:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:57.781 [ 00:03:57.781 "bdev_malloc_delete", 00:03:57.781 "bdev_malloc_create", 00:03:57.781 "bdev_null_resize", 00:03:57.781 "bdev_null_delete", 00:03:57.781 "bdev_null_create", 00:03:57.781 "bdev_nvme_cuse_unregister", 00:03:57.781 "bdev_nvme_cuse_register", 00:03:57.781 "bdev_opal_new_user", 00:03:57.781 "bdev_opal_set_lock_state", 00:03:57.781 "bdev_opal_delete", 00:03:57.781 "bdev_opal_get_info", 00:03:57.781 "bdev_opal_create", 00:03:57.781 "bdev_nvme_opal_revert", 00:03:57.781 "bdev_nvme_opal_init", 00:03:57.781 "bdev_nvme_send_cmd", 00:03:57.781 "bdev_nvme_set_keys", 00:03:57.781 "bdev_nvme_get_path_iostat", 00:03:57.781 "bdev_nvme_get_mdns_discovery_info", 00:03:57.781 "bdev_nvme_stop_mdns_discovery", 00:03:57.781 "bdev_nvme_start_mdns_discovery", 00:03:57.781 "bdev_nvme_set_multipath_policy", 00:03:57.781 "bdev_nvme_set_preferred_path", 00:03:57.781 "bdev_nvme_get_io_paths", 00:03:57.781 "bdev_nvme_remove_error_injection", 00:03:57.781 "bdev_nvme_add_error_injection", 00:03:57.781 "bdev_nvme_get_discovery_info", 00:03:57.781 "bdev_nvme_stop_discovery", 00:03:57.781 "bdev_nvme_start_discovery", 00:03:57.781 "bdev_nvme_get_controller_health_info", 00:03:57.781 "bdev_nvme_disable_controller", 00:03:57.781 "bdev_nvme_enable_controller", 00:03:57.781 "bdev_nvme_reset_controller", 00:03:57.781 "bdev_nvme_get_transport_statistics", 00:03:57.781 "bdev_nvme_apply_firmware", 00:03:57.781 "bdev_nvme_detach_controller", 00:03:57.781 "bdev_nvme_get_controllers", 00:03:57.781 "bdev_nvme_attach_controller", 00:03:57.781 "bdev_nvme_set_hotplug", 00:03:57.781 "bdev_nvme_set_options", 00:03:57.781 "bdev_passthru_delete", 00:03:57.781 "bdev_passthru_create", 00:03:57.781 "bdev_lvol_set_parent_bdev", 00:03:57.781 "bdev_lvol_set_parent", 00:03:57.781 "bdev_lvol_check_shallow_copy", 00:03:57.781 "bdev_lvol_start_shallow_copy", 00:03:57.781 "bdev_lvol_grow_lvstore", 00:03:57.781 "bdev_lvol_get_lvols", 00:03:57.781 
"bdev_lvol_get_lvstores", 00:03:57.781 "bdev_lvol_delete", 00:03:57.781 "bdev_lvol_set_read_only", 00:03:57.781 "bdev_lvol_resize", 00:03:57.781 "bdev_lvol_decouple_parent", 00:03:57.781 "bdev_lvol_inflate", 00:03:57.781 "bdev_lvol_rename", 00:03:57.781 "bdev_lvol_clone_bdev", 00:03:57.781 "bdev_lvol_clone", 00:03:57.781 "bdev_lvol_snapshot", 00:03:57.781 "bdev_lvol_create", 00:03:57.781 "bdev_lvol_delete_lvstore", 00:03:57.781 "bdev_lvol_rename_lvstore", 00:03:57.781 "bdev_lvol_create_lvstore", 00:03:57.781 "bdev_raid_set_options", 00:03:57.781 "bdev_raid_remove_base_bdev", 00:03:57.781 "bdev_raid_add_base_bdev", 00:03:57.781 "bdev_raid_delete", 00:03:57.781 "bdev_raid_create", 00:03:57.781 "bdev_raid_get_bdevs", 00:03:57.781 "bdev_error_inject_error", 00:03:57.781 "bdev_error_delete", 00:03:57.781 "bdev_error_create", 00:03:57.781 "bdev_split_delete", 00:03:57.781 "bdev_split_create", 00:03:57.781 "bdev_delay_delete", 00:03:57.781 "bdev_delay_create", 00:03:57.781 "bdev_delay_update_latency", 00:03:57.781 "bdev_zone_block_delete", 00:03:57.781 "bdev_zone_block_create", 00:03:57.781 "blobfs_create", 00:03:57.781 "blobfs_detect", 00:03:57.781 "blobfs_set_cache_size", 00:03:57.781 "bdev_aio_delete", 00:03:57.781 "bdev_aio_rescan", 00:03:57.781 "bdev_aio_create", 00:03:57.781 "bdev_ftl_set_property", 00:03:57.781 "bdev_ftl_get_properties", 00:03:57.781 "bdev_ftl_get_stats", 00:03:57.781 "bdev_ftl_unmap", 00:03:57.781 "bdev_ftl_unload", 00:03:57.781 "bdev_ftl_delete", 00:03:57.781 "bdev_ftl_load", 00:03:57.782 "bdev_ftl_create", 00:03:57.782 "bdev_virtio_attach_controller", 00:03:57.782 "bdev_virtio_scsi_get_devices", 00:03:57.782 "bdev_virtio_detach_controller", 00:03:57.782 "bdev_virtio_blk_set_hotplug", 00:03:57.782 "bdev_iscsi_delete", 00:03:57.782 "bdev_iscsi_create", 00:03:57.782 "bdev_iscsi_set_options", 00:03:57.782 "accel_error_inject_error", 00:03:57.782 "ioat_scan_accel_module", 00:03:57.782 "dsa_scan_accel_module", 00:03:57.782 "iaa_scan_accel_module", 
00:03:57.782 "vfu_virtio_create_fs_endpoint", 00:03:57.782 "vfu_virtio_create_scsi_endpoint", 00:03:57.782 "vfu_virtio_scsi_remove_target", 00:03:57.782 "vfu_virtio_scsi_add_target", 00:03:57.782 "vfu_virtio_create_blk_endpoint", 00:03:57.782 "vfu_virtio_delete_endpoint", 00:03:57.782 "keyring_file_remove_key", 00:03:57.782 "keyring_file_add_key", 00:03:57.782 "keyring_linux_set_options", 00:03:57.782 "fsdev_aio_delete", 00:03:57.782 "fsdev_aio_create", 00:03:57.782 "iscsi_get_histogram", 00:03:57.782 "iscsi_enable_histogram", 00:03:57.782 "iscsi_set_options", 00:03:57.782 "iscsi_get_auth_groups", 00:03:57.782 "iscsi_auth_group_remove_secret", 00:03:57.782 "iscsi_auth_group_add_secret", 00:03:57.782 "iscsi_delete_auth_group", 00:03:57.782 "iscsi_create_auth_group", 00:03:57.782 "iscsi_set_discovery_auth", 00:03:57.782 "iscsi_get_options", 00:03:57.782 "iscsi_target_node_request_logout", 00:03:57.782 "iscsi_target_node_set_redirect", 00:03:57.782 "iscsi_target_node_set_auth", 00:03:57.782 "iscsi_target_node_add_lun", 00:03:57.782 "iscsi_get_stats", 00:03:57.782 "iscsi_get_connections", 00:03:57.782 "iscsi_portal_group_set_auth", 00:03:57.782 "iscsi_start_portal_group", 00:03:57.782 "iscsi_delete_portal_group", 00:03:57.782 "iscsi_create_portal_group", 00:03:57.782 "iscsi_get_portal_groups", 00:03:57.782 "iscsi_delete_target_node", 00:03:57.782 "iscsi_target_node_remove_pg_ig_maps", 00:03:57.782 "iscsi_target_node_add_pg_ig_maps", 00:03:57.782 "iscsi_create_target_node", 00:03:57.782 "iscsi_get_target_nodes", 00:03:57.782 "iscsi_delete_initiator_group", 00:03:57.782 "iscsi_initiator_group_remove_initiators", 00:03:57.782 "iscsi_initiator_group_add_initiators", 00:03:57.782 "iscsi_create_initiator_group", 00:03:57.782 "iscsi_get_initiator_groups", 00:03:57.782 "nvmf_set_crdt", 00:03:57.782 "nvmf_set_config", 00:03:57.782 "nvmf_set_max_subsystems", 00:03:57.782 "nvmf_stop_mdns_prr", 00:03:57.782 "nvmf_publish_mdns_prr", 00:03:57.782 "nvmf_subsystem_get_listeners", 
00:03:57.782 "nvmf_subsystem_get_qpairs", 00:03:57.782 "nvmf_subsystem_get_controllers", 00:03:57.782 "nvmf_get_stats", 00:03:57.782 "nvmf_get_transports", 00:03:57.782 "nvmf_create_transport", 00:03:57.782 "nvmf_get_targets", 00:03:57.782 "nvmf_delete_target", 00:03:57.782 "nvmf_create_target", 00:03:57.782 "nvmf_subsystem_allow_any_host", 00:03:57.782 "nvmf_subsystem_set_keys", 00:03:57.782 "nvmf_discovery_referral_remove_host", 00:03:57.782 "nvmf_discovery_referral_add_host", 00:03:57.782 "nvmf_subsystem_remove_host", 00:03:57.782 "nvmf_subsystem_add_host", 00:03:57.782 "nvmf_ns_remove_host", 00:03:57.782 "nvmf_ns_add_host", 00:03:57.782 "nvmf_subsystem_remove_ns", 00:03:57.782 "nvmf_subsystem_set_ns_ana_group", 00:03:57.782 "nvmf_subsystem_add_ns", 00:03:57.782 "nvmf_subsystem_listener_set_ana_state", 00:03:57.782 "nvmf_discovery_get_referrals", 00:03:57.782 "nvmf_discovery_remove_referral", 00:03:57.782 "nvmf_discovery_add_referral", 00:03:57.782 "nvmf_subsystem_remove_listener", 00:03:57.782 "nvmf_subsystem_add_listener", 00:03:57.782 "nvmf_delete_subsystem", 00:03:57.782 "nvmf_create_subsystem", 00:03:57.782 "nvmf_get_subsystems", 00:03:57.782 "env_dpdk_get_mem_stats", 00:03:57.782 "nbd_get_disks", 00:03:57.782 "nbd_stop_disk", 00:03:57.782 "nbd_start_disk", 00:03:57.782 "ublk_recover_disk", 00:03:57.782 "ublk_get_disks", 00:03:57.782 "ublk_stop_disk", 00:03:57.782 "ublk_start_disk", 00:03:57.782 "ublk_destroy_target", 00:03:57.782 "ublk_create_target", 00:03:57.782 "virtio_blk_create_transport", 00:03:57.782 "virtio_blk_get_transports", 00:03:57.782 "vhost_controller_set_coalescing", 00:03:57.782 "vhost_get_controllers", 00:03:57.782 "vhost_delete_controller", 00:03:57.782 "vhost_create_blk_controller", 00:03:57.782 "vhost_scsi_controller_remove_target", 00:03:57.782 "vhost_scsi_controller_add_target", 00:03:57.782 "vhost_start_scsi_controller", 00:03:57.782 "vhost_create_scsi_controller", 00:03:57.782 "thread_set_cpumask", 00:03:57.782 
"scheduler_set_options", 00:03:57.782 "framework_get_governor", 00:03:57.782 "framework_get_scheduler", 00:03:57.782 "framework_set_scheduler", 00:03:57.782 "framework_get_reactors", 00:03:57.782 "thread_get_io_channels", 00:03:57.782 "thread_get_pollers", 00:03:57.782 "thread_get_stats", 00:03:57.782 "framework_monitor_context_switch", 00:03:57.782 "spdk_kill_instance", 00:03:57.782 "log_enable_timestamps", 00:03:57.782 "log_get_flags", 00:03:57.782 "log_clear_flag", 00:03:57.782 "log_set_flag", 00:03:57.782 "log_get_level", 00:03:57.782 "log_set_level", 00:03:57.782 "log_get_print_level", 00:03:57.782 "log_set_print_level", 00:03:57.782 "framework_enable_cpumask_locks", 00:03:57.782 "framework_disable_cpumask_locks", 00:03:57.782 "framework_wait_init", 00:03:57.782 "framework_start_init", 00:03:57.782 "scsi_get_devices", 00:03:57.782 "bdev_get_histogram", 00:03:57.782 "bdev_enable_histogram", 00:03:57.782 "bdev_set_qos_limit", 00:03:57.782 "bdev_set_qd_sampling_period", 00:03:57.782 "bdev_get_bdevs", 00:03:57.782 "bdev_reset_iostat", 00:03:57.782 "bdev_get_iostat", 00:03:57.782 "bdev_examine", 00:03:57.782 "bdev_wait_for_examine", 00:03:57.782 "bdev_set_options", 00:03:57.782 "accel_get_stats", 00:03:57.782 "accel_set_options", 00:03:57.782 "accel_set_driver", 00:03:57.782 "accel_crypto_key_destroy", 00:03:57.782 "accel_crypto_keys_get", 00:03:57.782 "accel_crypto_key_create", 00:03:57.782 "accel_assign_opc", 00:03:57.782 "accel_get_module_info", 00:03:57.782 "accel_get_opc_assignments", 00:03:57.782 "vmd_rescan", 00:03:57.782 "vmd_remove_device", 00:03:57.782 "vmd_enable", 00:03:57.782 "sock_get_default_impl", 00:03:57.782 "sock_set_default_impl", 00:03:57.782 "sock_impl_set_options", 00:03:57.782 "sock_impl_get_options", 00:03:57.782 "iobuf_get_stats", 00:03:57.782 "iobuf_set_options", 00:03:57.782 "keyring_get_keys", 00:03:57.782 "vfu_tgt_set_base_path", 00:03:57.782 "framework_get_pci_devices", 00:03:57.782 "framework_get_config", 00:03:57.782 
"framework_get_subsystems", 00:03:57.782 "fsdev_set_opts", 00:03:57.782 "fsdev_get_opts", 00:03:57.782 "trace_get_info", 00:03:57.782 "trace_get_tpoint_group_mask", 00:03:57.782 "trace_disable_tpoint_group", 00:03:57.782 "trace_enable_tpoint_group", 00:03:57.782 "trace_clear_tpoint_mask", 00:03:57.782 "trace_set_tpoint_mask", 00:03:57.782 "notify_get_notifications", 00:03:57.782 "notify_get_types", 00:03:57.782 "spdk_get_version", 00:03:57.782 "rpc_get_methods" 00:03:57.782 ] 00:03:58.043 12:46:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:58.043 12:46:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:58.043 12:46:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2133580 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2133580 ']' 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2133580 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2133580 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2133580' 00:03:58.043 killing process with pid 2133580 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2133580 00:03:58.043 12:46:55 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2133580 00:03:58.303 00:03:58.303 real 0m1.168s 00:03:58.303 user 0m1.959s 00:03:58.303 sys 0m0.448s 00:03:58.303 12:46:55 
spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:58.303 12:46:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:58.303 ************************************ 00:03:58.303 END TEST spdkcli_tcp 00:03:58.303 ************************************ 00:03:58.303 12:46:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:58.303 12:46:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:58.303 12:46:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:58.303 12:46:55 -- common/autotest_common.sh@10 -- # set +x 00:03:58.303 ************************************ 00:03:58.303 START TEST dpdk_mem_utility 00:03:58.303 ************************************ 00:03:58.303 12:46:55 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:58.563 * Looking for test storage... 
00:03:58.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.563 12:46:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:58.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.563 --rc genhtml_branch_coverage=1 00:03:58.563 --rc genhtml_function_coverage=1 00:03:58.563 --rc genhtml_legend=1 00:03:58.563 --rc geninfo_all_blocks=1 00:03:58.563 --rc geninfo_unexecuted_blocks=1 00:03:58.563 00:03:58.563 ' 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:58.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.563 --rc genhtml_branch_coverage=1 00:03:58.563 --rc genhtml_function_coverage=1 00:03:58.563 --rc genhtml_legend=1 00:03:58.563 --rc geninfo_all_blocks=1 00:03:58.563 --rc 
geninfo_unexecuted_blocks=1 00:03:58.563 00:03:58.563 ' 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:58.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.563 --rc genhtml_branch_coverage=1 00:03:58.563 --rc genhtml_function_coverage=1 00:03:58.563 --rc genhtml_legend=1 00:03:58.563 --rc geninfo_all_blocks=1 00:03:58.563 --rc geninfo_unexecuted_blocks=1 00:03:58.563 00:03:58.563 ' 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:58.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.563 --rc genhtml_branch_coverage=1 00:03:58.563 --rc genhtml_function_coverage=1 00:03:58.563 --rc genhtml_legend=1 00:03:58.563 --rc geninfo_all_blocks=1 00:03:58.563 --rc geninfo_unexecuted_blocks=1 00:03:58.563 00:03:58.563 ' 00:03:58.563 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:58.563 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2133890 00:03:58.563 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2133890 00:03:58.563 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2133890 ']' 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:58.563 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:58.563 [2024-11-18 12:46:56.172431] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:03:58.563 [2024-11-18 12:46:56.172481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133890 ] 00:03:58.563 [2024-11-18 12:46:56.248605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.822 [2024-11-18 12:46:56.289637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.822 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:58.822 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:03:58.822 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:58.822 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:58.822 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.822 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:58.822 { 00:03:58.822 "filename": "/tmp/spdk_mem_dump.txt" 00:03:58.822 } 00:03:58.822 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.822 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:59.083 DPDK memory size 810.000000 MiB in 1 heap(s) 00:03:59.083 1 heaps totaling size 810.000000 MiB 00:03:59.083 size: 810.000000 MiB heap id: 0 00:03:59.083 end heaps---------- 00:03:59.083 9 mempools totaling size 595.772034 MiB 
00:03:59.083 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:59.083 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:59.083 size: 92.545471 MiB name: bdev_io_2133890 00:03:59.083 size: 50.003479 MiB name: msgpool_2133890 00:03:59.083 size: 36.509338 MiB name: fsdev_io_2133890 00:03:59.083 size: 21.763794 MiB name: PDU_Pool 00:03:59.083 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:59.083 size: 4.133484 MiB name: evtpool_2133890 00:03:59.083 size: 0.026123 MiB name: Session_Pool 00:03:59.083 end mempools------- 00:03:59.083 6 memzones totaling size 4.142822 MiB 00:03:59.083 size: 1.000366 MiB name: RG_ring_0_2133890 00:03:59.083 size: 1.000366 MiB name: RG_ring_1_2133890 00:03:59.083 size: 1.000366 MiB name: RG_ring_4_2133890 00:03:59.083 size: 1.000366 MiB name: RG_ring_5_2133890 00:03:59.083 size: 0.125366 MiB name: RG_ring_2_2133890 00:03:59.083 size: 0.015991 MiB name: RG_ring_3_2133890 00:03:59.083 end memzones------- 00:03:59.083 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:59.083 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:59.083 list of free elements. 
size: 10.862488 MiB 00:03:59.083 element at address: 0x200018a00000 with size: 0.999878 MiB 00:03:59.083 element at address: 0x200018c00000 with size: 0.999878 MiB 00:03:59.083 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:59.083 element at address: 0x200031800000 with size: 0.994446 MiB 00:03:59.083 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:59.083 element at address: 0x200012c00000 with size: 0.954285 MiB 00:03:59.083 element at address: 0x200018e00000 with size: 0.936584 MiB 00:03:59.083 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:59.083 element at address: 0x20001a600000 with size: 0.582886 MiB 00:03:59.083 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:59.083 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:59.083 element at address: 0x200019000000 with size: 0.485657 MiB 00:03:59.083 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:59.083 element at address: 0x200027a00000 with size: 0.410034 MiB 00:03:59.083 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:59.083 list of standard malloc elements. 
size: 199.218628 MiB 00:03:59.083 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:59.083 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:59.083 element at address: 0x200018afff80 with size: 1.000122 MiB 00:03:59.083 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:03:59.083 element at address: 0x200018efff80 with size: 1.000122 MiB 00:03:59.083 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:59.083 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:03:59.083 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:59.083 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:03:59.084 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:59.084 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:03:59.084 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20001a695380 with size: 0.000183 MiB 00:03:59.084 element at address: 0x20001a695440 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200027a69040 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:03:59.084 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:03:59.084 list of memzone associated elements. 
size: 599.918884 MiB 00:03:59.084 element at address: 0x20001a695500 with size: 211.416748 MiB 00:03:59.084 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:59.084 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:03:59.084 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:59.084 element at address: 0x200012df4780 with size: 92.045044 MiB 00:03:59.084 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2133890_0 00:03:59.084 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:59.084 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2133890_0 00:03:59.084 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:59.084 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2133890_0 00:03:59.084 element at address: 0x2000191be940 with size: 20.255554 MiB 00:03:59.084 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:59.084 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:03:59.084 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:59.084 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:59.084 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2133890_0 00:03:59.084 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:59.084 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2133890 00:03:59.084 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:59.084 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2133890 00:03:59.084 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:59.084 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:59.084 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:03:59.084 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:59.084 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:59.084 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:59.084 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:59.084 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:59.084 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:59.084 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2133890 00:03:59.084 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:59.084 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2133890 00:03:59.084 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:03:59.084 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2133890 00:03:59.084 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:03:59.084 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2133890 00:03:59.084 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:59.084 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2133890 00:03:59.084 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:59.084 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2133890 00:03:59.084 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:59.084 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:59.084 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:59.084 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:59.084 element at address: 0x20001907c540 with size: 0.250488 MiB 00:03:59.084 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:59.084 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:59.084 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2133890 00:03:59.084 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:59.084 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2133890 00:03:59.084 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:03:59.084 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:59.084 element at address: 0x200027a69100 with size: 0.023743 MiB 00:03:59.084 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:59.084 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:59.084 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2133890 00:03:59.084 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:03:59.084 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:59.084 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:59.084 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2133890 00:03:59.084 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:59.084 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2133890 00:03:59.084 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:59.084 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2133890 00:03:59.084 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:03:59.084 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:59.084 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:59.084 12:46:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2133890 00:03:59.084 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2133890 ']' 00:03:59.084 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2133890 00:03:59.084 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:03:59.084 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:59.084 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2133890 00:03:59.084 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:59.084 12:46:56 
dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:59.084 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2133890' 00:03:59.084 killing process with pid 2133890 00:03:59.084 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2133890 00:03:59.084 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2133890 00:03:59.344 00:03:59.344 real 0m1.032s 00:03:59.344 user 0m0.978s 00:03:59.344 sys 0m0.412s 00:03:59.344 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:59.344 12:46:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:59.344 ************************************ 00:03:59.344 END TEST dpdk_mem_utility 00:03:59.344 ************************************ 00:03:59.344 12:46:57 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:59.344 12:46:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:59.344 12:46:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:59.344 12:46:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.605 ************************************ 00:03:59.605 START TEST event 00:03:59.605 ************************************ 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:59.605 * Looking for test storage... 
00:03:59.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1691 -- # lcov --version 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:59.605 12:46:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.605 12:46:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.605 12:46:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.605 12:46:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.605 12:46:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.605 12:46:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.605 12:46:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.605 12:46:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.605 12:46:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.605 12:46:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.605 12:46:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.605 12:46:57 event -- scripts/common.sh@344 -- # case "$op" in 00:03:59.605 12:46:57 event -- scripts/common.sh@345 -- # : 1 00:03:59.605 12:46:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.605 12:46:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.605 12:46:57 event -- scripts/common.sh@365 -- # decimal 1 00:03:59.605 12:46:57 event -- scripts/common.sh@353 -- # local d=1 00:03:59.605 12:46:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.605 12:46:57 event -- scripts/common.sh@355 -- # echo 1 00:03:59.605 12:46:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.605 12:46:57 event -- scripts/common.sh@366 -- # decimal 2 00:03:59.605 12:46:57 event -- scripts/common.sh@353 -- # local d=2 00:03:59.605 12:46:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.605 12:46:57 event -- scripts/common.sh@355 -- # echo 2 00:03:59.605 12:46:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.605 12:46:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.605 12:46:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.605 12:46:57 event -- scripts/common.sh@368 -- # return 0 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.605 --rc genhtml_branch_coverage=1 00:03:59.605 --rc genhtml_function_coverage=1 00:03:59.605 --rc genhtml_legend=1 00:03:59.605 --rc geninfo_all_blocks=1 00:03:59.605 --rc geninfo_unexecuted_blocks=1 00:03:59.605 00:03:59.605 ' 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.605 --rc genhtml_branch_coverage=1 00:03:59.605 --rc genhtml_function_coverage=1 00:03:59.605 --rc genhtml_legend=1 00:03:59.605 --rc geninfo_all_blocks=1 00:03:59.605 --rc geninfo_unexecuted_blocks=1 00:03:59.605 00:03:59.605 ' 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:59.605 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:03:59.605 --rc genhtml_branch_coverage=1 00:03:59.605 --rc genhtml_function_coverage=1 00:03:59.605 --rc genhtml_legend=1 00:03:59.605 --rc geninfo_all_blocks=1 00:03:59.605 --rc geninfo_unexecuted_blocks=1 00:03:59.605 00:03:59.605 ' 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.605 --rc genhtml_branch_coverage=1 00:03:59.605 --rc genhtml_function_coverage=1 00:03:59.605 --rc genhtml_legend=1 00:03:59.605 --rc geninfo_all_blocks=1 00:03:59.605 --rc geninfo_unexecuted_blocks=1 00:03:59.605 00:03:59.605 ' 00:03:59.605 12:46:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:59.605 12:46:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:59.605 12:46:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:03:59.605 12:46:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:59.605 12:46:57 event -- common/autotest_common.sh@10 -- # set +x 00:03:59.605 ************************************ 00:03:59.605 START TEST event_perf 00:03:59.605 ************************************ 00:03:59.605 12:46:57 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:59.605 Running I/O for 1 seconds...[2024-11-18 12:46:57.283913] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:03:59.605 [2024-11-18 12:46:57.283981] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134180 ] 00:03:59.866 [2024-11-18 12:46:57.362524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:59.866 [2024-11-18 12:46:57.406483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:59.866 [2024-11-18 12:46:57.406592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:59.866 [2024-11-18 12:46:57.406702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.866 [2024-11-18 12:46:57.406703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:00.807 Running I/O for 1 seconds... 00:04:00.807 lcore 0: 207552 00:04:00.807 lcore 1: 207551 00:04:00.807 lcore 2: 207550 00:04:00.807 lcore 3: 207551 00:04:00.807 done. 
00:04:00.807 00:04:00.807 real 0m1.184s 00:04:00.807 user 0m4.092s 00:04:00.807 sys 0m0.088s 00:04:00.807 12:46:58 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:00.807 12:46:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:00.807 ************************************ 00:04:00.807 END TEST event_perf 00:04:00.807 ************************************ 00:04:00.807 12:46:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:00.807 12:46:58 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:00.807 12:46:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:00.807 12:46:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:01.067 ************************************ 00:04:01.067 START TEST event_reactor 00:04:01.067 ************************************ 00:04:01.067 12:46:58 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:01.067 [2024-11-18 12:46:58.539446] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:01.067 [2024-11-18 12:46:58.539514] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134435 ] 00:04:01.067 [2024-11-18 12:46:58.617080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.067 [2024-11-18 12:46:58.657574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.006 test_start 00:04:02.006 oneshot 00:04:02.006 tick 100 00:04:02.006 tick 100 00:04:02.006 tick 250 00:04:02.006 tick 100 00:04:02.006 tick 100 00:04:02.006 tick 100 00:04:02.006 tick 250 00:04:02.006 tick 500 00:04:02.006 tick 100 00:04:02.006 tick 100 00:04:02.006 tick 250 00:04:02.006 tick 100 00:04:02.006 tick 100 00:04:02.006 test_end 00:04:02.006 00:04:02.006 real 0m1.178s 00:04:02.006 user 0m1.098s 00:04:02.006 sys 0m0.075s 00:04:02.006 12:46:59 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:02.006 12:46:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:02.006 ************************************ 00:04:02.006 END TEST event_reactor 00:04:02.006 ************************************ 00:04:02.266 12:46:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:02.267 12:46:59 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:02.267 12:46:59 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:02.267 12:46:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:02.267 ************************************ 00:04:02.267 START TEST event_reactor_perf 00:04:02.267 ************************************ 00:04:02.267 12:46:59 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:02.267 [2024-11-18 12:46:59.789755] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:02.267 [2024-11-18 12:46:59.789822] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134682 ] 00:04:02.267 [2024-11-18 12:46:59.870598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.267 [2024-11-18 12:46:59.911080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.649 test_start 00:04:03.649 test_end 00:04:03.649 Performance: 478853 events per second 00:04:03.649 00:04:03.649 real 0m1.185s 00:04:03.649 user 0m1.100s 00:04:03.649 sys 0m0.081s 00:04:03.649 12:47:00 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:03.649 12:47:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:03.649 ************************************ 00:04:03.649 END TEST event_reactor_perf 00:04:03.649 ************************************ 00:04:03.649 12:47:00 event -- event/event.sh@49 -- # uname -s 00:04:03.649 12:47:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:03.649 12:47:00 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:03.649 12:47:00 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.649 12:47:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.649 12:47:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.649 ************************************ 00:04:03.649 START TEST event_scheduler 00:04:03.649 ************************************ 00:04:03.649 12:47:01 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:03.649 * Looking for test storage... 00:04:03.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:03.649 12:47:01 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:03.649 12:47:01 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:03.649 12:47:01 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:03.649 12:47:01 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.649 12:47:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:03.649 12:47:01 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.649 12:47:01 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:03.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.650 --rc genhtml_branch_coverage=1 00:04:03.650 --rc genhtml_function_coverage=1 00:04:03.650 --rc genhtml_legend=1 00:04:03.650 --rc geninfo_all_blocks=1 00:04:03.650 --rc geninfo_unexecuted_blocks=1 00:04:03.650 00:04:03.650 ' 00:04:03.650 12:47:01 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:03.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.650 --rc genhtml_branch_coverage=1 00:04:03.650 --rc genhtml_function_coverage=1 00:04:03.650 --rc 
genhtml_legend=1 00:04:03.650 --rc geninfo_all_blocks=1 00:04:03.650 --rc geninfo_unexecuted_blocks=1 00:04:03.650 00:04:03.650 ' 00:04:03.650 12:47:01 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:03.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.650 --rc genhtml_branch_coverage=1 00:04:03.650 --rc genhtml_function_coverage=1 00:04:03.650 --rc genhtml_legend=1 00:04:03.650 --rc geninfo_all_blocks=1 00:04:03.650 --rc geninfo_unexecuted_blocks=1 00:04:03.650 00:04:03.650 ' 00:04:03.650 12:47:01 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:03.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.650 --rc genhtml_branch_coverage=1 00:04:03.650 --rc genhtml_function_coverage=1 00:04:03.650 --rc genhtml_legend=1 00:04:03.650 --rc geninfo_all_blocks=1 00:04:03.650 --rc geninfo_unexecuted_blocks=1 00:04:03.650 00:04:03.650 ' 00:04:03.650 12:47:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:03.650 12:47:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2134970 00:04:03.650 12:47:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.650 12:47:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:03.650 12:47:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2134970 00:04:03.650 12:47:01 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2134970 ']' 00:04:03.650 12:47:01 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.650 12:47:01 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:03.650 12:47:01 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.650 12:47:01 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:03.650 12:47:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:03.650 [2024-11-18 12:47:01.254989] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:03.650 [2024-11-18 12:47:01.255039] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134970 ] 00:04:03.650 [2024-11-18 12:47:01.329682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:03.910 [2024-11-18 12:47:01.374327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.910 [2024-11-18 12:47:01.374451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.910 [2024-11-18 12:47:01.374490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:03.910 [2024-11-18 12:47:01.374490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:03.910 12:47:01 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:03.910 12:47:01 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:03.910 12:47:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:03.910 12:47:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.910 12:47:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:03.910 [2024-11-18 12:47:01.419303] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:03.910 [2024-11-18 12:47:01.419320] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:03.910 [2024-11-18 12:47:01.419330] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:03.910 [2024-11-18 12:47:01.419335] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:03.910 [2024-11-18 12:47:01.419340] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:03.910 12:47:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.910 12:47:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:03.910 12:47:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.910 12:47:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:03.910 [2024-11-18 12:47:01.493309] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:03.910 12:47:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.911 12:47:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:03.911 12:47:01 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.911 12:47:01 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.911 12:47:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:03.911 ************************************ 00:04:03.911 START TEST scheduler_create_thread 00:04:03.911 ************************************ 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:03.911 2 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:03.911 3 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:03.911 4 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:03.911 5 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.911 12:47:01 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:03.911 6 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:03.911 7 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:03.911 8 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.911 12:47:01 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:03.911 9 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.911 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:04.171 10 00:04:04.171 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.171 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:04.171 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.171 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:04.171 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.171 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:04.171 12:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:04.171 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.171 12:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:04.431 12:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.431 12:47:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:04.431 12:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.431 12:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:06.342 12:47:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.342 12:47:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:06.342 12:47:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:06.342 12:47:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.342 12:47:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.282 12:47:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.282 00:04:07.282 real 0m3.102s 00:04:07.282 user 0m0.023s 00:04:07.282 sys 0m0.006s 00:04:07.282 12:47:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.282 12:47:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.282 ************************************ 00:04:07.282 END TEST scheduler_create_thread 00:04:07.282 ************************************ 00:04:07.282 12:47:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:07.282 12:47:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2134970 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2134970 ']' 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@956 -- # 
kill -0 2134970 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2134970 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2134970' 00:04:07.282 killing process with pid 2134970 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2134970 00:04:07.282 12:47:04 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2134970 00:04:07.542 [2024-11-18 12:47:05.008628] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:07.542 00:04:07.542 real 0m4.162s 00:04:07.542 user 0m6.644s 00:04:07.542 sys 0m0.382s 00:04:07.542 12:47:05 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.542 12:47:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.542 ************************************ 00:04:07.542 END TEST event_scheduler 00:04:07.542 ************************************ 00:04:07.542 12:47:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:07.542 12:47:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:07.542 12:47:05 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.542 12:47:05 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.542 12:47:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.802 ************************************ 00:04:07.802 START TEST app_repeat 00:04:07.802 ************************************ 00:04:07.802 12:47:05 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2135712 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.802 12:47:05 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:07.803 12:47:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2135712' 00:04:07.803 Process app_repeat pid: 2135712 00:04:07.803 12:47:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:07.803 12:47:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:07.803 spdk_app_start Round 0 00:04:07.803 12:47:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2135712 /var/tmp/spdk-nbd.sock 00:04:07.803 12:47:05 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2135712 ']' 00:04:07.803 12:47:05 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:07.803 12:47:05 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:07.803 12:47:05 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:07.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:07.803 12:47:05 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:07.803 12:47:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:07.803 [2024-11-18 12:47:05.304520] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:07.803 [2024-11-18 12:47:05.304569] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135712 ] 00:04:07.803 [2024-11-18 12:47:05.378342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:07.803 [2024-11-18 12:47:05.423051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.803 [2024-11-18 12:47:05.423053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.063 12:47:05 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:08.063 12:47:05 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:08.063 12:47:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:08.063 Malloc0 00:04:08.063 12:47:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:08.323 Malloc1 00:04:08.323 12:47:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:08.323 
12:47:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:08.323 12:47:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:08.583 /dev/nbd0 00:04:08.584 12:47:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:08.584 12:47:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:08.584 1+0 records in 00:04:08.584 1+0 records out 00:04:08.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177985 s, 23.0 MB/s 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:08.584 12:47:06 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:08.584 12:47:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:08.584 12:47:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:08.584 12:47:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:08.844 /dev/nbd1 00:04:08.844 12:47:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:08.844 12:47:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:08.844 12:47:06 event.app_repeat -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:08.844 1+0 records in 00:04:08.844 1+0 records out 00:04:08.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189709 s, 21.6 MB/s 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:08.844 12:47:06 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:08.844 12:47:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:08.844 12:47:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:08.844 12:47:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:08.844 12:47:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.844 12:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:09.105 { 00:04:09.105 "nbd_device": "/dev/nbd0", 00:04:09.105 "bdev_name": "Malloc0" 00:04:09.105 }, 00:04:09.105 { 00:04:09.105 "nbd_device": "/dev/nbd1", 00:04:09.105 "bdev_name": "Malloc1" 00:04:09.105 } 00:04:09.105 ]' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:09.105 { 00:04:09.105 "nbd_device": "/dev/nbd0", 00:04:09.105 "bdev_name": "Malloc0" 00:04:09.105 
}, 00:04:09.105 { 00:04:09.105 "nbd_device": "/dev/nbd1", 00:04:09.105 "bdev_name": "Malloc1" 00:04:09.105 } 00:04:09.105 ]' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:09.105 /dev/nbd1' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:09.105 /dev/nbd1' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:09.105 256+0 records in 00:04:09.105 256+0 records out 00:04:09.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106372 s, 98.6 MB/s 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:09.105 256+0 records in 00:04:09.105 256+0 records out 00:04:09.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142188 s, 73.7 MB/s 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:09.105 256+0 records in 00:04:09.105 256+0 records out 00:04:09.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015723 s, 66.7 MB/s 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:09.105 12:47:06 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:09.105 12:47:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:09.365 12:47:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:09.626 12:47:07 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.626 12:47:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:09.885 12:47:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:09.886 12:47:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:09.886 12:47:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:09.886 12:47:07 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:10.146 12:47:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:10.146 [2024-11-18 12:47:07.806856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:10.146 [2024-11-18 12:47:07.844310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.146 [2024-11-18 12:47:07.844311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.406 [2024-11-18 12:47:07.885422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:10.406 [2024-11-18 12:47:07.885468] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:13.699 12:47:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:13.699 12:47:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:13.699 spdk_app_start Round 1 00:04:13.699 12:47:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2135712 /var/tmp/spdk-nbd.sock 00:04:13.699 12:47:10 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2135712 ']' 00:04:13.699 12:47:10 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:13.699 12:47:10 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:13.699 12:47:10 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:13.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
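The `waitfornbd_exit` calls traced above poll `/proc/partitions` until a stopped nbd device disappears. A minimal sketch of that polling loop, reconstructed from the xtrace (an assumption — the actual helper lives in `bdev/nbd_common.sh` and may differ in details such as the poll interval):

```shell
# Sketch of the waitfornbd_exit-style loop seen in the trace:
# poll /proc/partitions until the named nbd device is gone,
# giving up after 20 attempts (the trace shows (( i <= 20 ))).
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the whole device name, so nbd1 does not match nbd10
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            return 0    # device no longer listed
        fi
        sleep 0.1       # assumed poll interval, not visible in the trace
    done
    return 1            # device still present after 20 polls
}
```

In the log this runs right after `rpc.py … nbd_stop_disk`, so the `break` at `nbd_common.sh@41` fires on the first iteration once the kernel has torn the device down.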
00:04:13.699 12:47:10 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:13.699 12:47:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 12:47:10 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:13.699 12:47:10 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:13.699 12:47:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:13.699 Malloc0 00:04:13.699 12:47:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:13.699 Malloc1 00:04:13.699 12:47:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.699 12:47:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:13.960 /dev/nbd0 00:04:13.960 12:47:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:13.960 12:47:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.960 1+0 records in 00:04:13.960 1+0 records out 00:04:13.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209093 s, 19.6 MB/s 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:13.960 12:47:11 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:13.960 12:47:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:13.960 12:47:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.960 12:47:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.960 12:47:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:14.220 /dev/nbd1 00:04:14.220 12:47:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:14.220 12:47:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:14.220 1+0 records in 00:04:14.220 1+0 records out 00:04:14.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019729 s, 20.8 MB/s 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:14.220 12:47:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:14.220 12:47:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:14.220 12:47:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.220 12:47:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.220 12:47:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.220 12:47:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.480 12:47:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:14.480 { 00:04:14.480 "nbd_device": "/dev/nbd0", 00:04:14.480 "bdev_name": "Malloc0" 00:04:14.480 }, 00:04:14.480 { 00:04:14.480 "nbd_device": "/dev/nbd1", 00:04:14.480 "bdev_name": "Malloc1" 00:04:14.480 } 00:04:14.480 ]' 00:04:14.480 12:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:14.480 { 00:04:14.480 "nbd_device": "/dev/nbd0", 00:04:14.480 "bdev_name": "Malloc0" 00:04:14.480 }, 00:04:14.480 { 00:04:14.480 "nbd_device": "/dev/nbd1", 00:04:14.480 "bdev_name": "Malloc1" 00:04:14.480 } 00:04:14.480 ]' 00:04:14.480 12:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:14.480 /dev/nbd1' 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:14.480 /dev/nbd1' 00:04:14.480 
12:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:14.480 256+0 records in 00:04:14.480 256+0 records out 00:04:14.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104512 s, 100 MB/s 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:14.480 256+0 records in 00:04:14.480 256+0 records out 00:04:14.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146697 s, 71.5 MB/s 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:14.480 256+0 records in 00:04:14.480 256+0 records out 00:04:14.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156341 s, 67.1 MB/s 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:14.480 12:47:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:14.481 12:47:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:14.481 12:47:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.481 12:47:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.740 12:47:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:14.999 12:47:12 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.999 12:47:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:15.259 12:47:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:15.259 12:47:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:15.518 12:47:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:15.518 [2024-11-18 12:47:13.145140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.518 [2024-11-18 12:47:13.182918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.518 [2024-11-18 12:47:13.182919] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.778 [2024-11-18 12:47:13.224756] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:15.778 [2024-11-18 12:47:13.224789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:18.316 12:47:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:18.316 12:47:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:18.316 spdk_app_start Round 2 00:04:18.316 12:47:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2135712 /var/tmp/spdk-nbd.sock 00:04:18.316 12:47:16 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2135712 ']' 00:04:18.316 12:47:16 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.316 12:47:16 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:18.316 12:47:16 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
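The `nbd_dd_data_verify` write/verify cycle in the trace generates 1 MiB of random data, copies it onto each nbd device with `dd`, then byte-compares each device against the source with `cmp`. A self-contained sketch of that round-trip (an assumption based on the xtrace; the real helper drives block devices with `oflag=direct`/`iflag=direct`, which this file-based sketch omits):

```shell
# Sketch of the nbd_dd_data_verify pattern from the trace: write the
# same 256 x 4 KiB random blocks to every target, then verify each one.
nbd_dd_data_verify() {
    local targets=("$@")
    local tmp_file dev
    tmp_file=$(mktemp)
    # write phase: 1 MiB of random data (matches bs=4096 count=256 above)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
    for dev in "${targets[@]}"; do
        # real helper adds oflag=direct here to bypass the page cache
        dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
    done
    # verify phase: byte-compare the first 1M of each target
    for dev in "${targets[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || return 1
    done
    rm -f "$tmp_file"
}
```

Verifying through the device after a direct write is what gives the test its value: the data has to survive the full nbd → SPDK bdev → malloc round-trip, not just the page cache.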
00:04:18.316 12:47:16 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:18.316 12:47:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.575 12:47:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:18.575 12:47:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:18.575 12:47:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.835 Malloc0 00:04:18.835 12:47:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.096 Malloc1 00:04:19.096 12:47:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.096 12:47:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.355 /dev/nbd0 00:04:19.355 12:47:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.355 12:47:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.355 12:47:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:19.355 12:47:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:19.355 12:47:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:19.355 12:47:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:19.355 12:47:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:19.355 12:47:16 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:19.355 12:47:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:19.355 12:47:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:19.355 12:47:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.356 1+0 records in 00:04:19.356 1+0 records out 00:04:19.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374635 s, 10.9 MB/s 00:04:19.356 12:47:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.356 12:47:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:19.356 12:47:16 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.356 12:47:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:19.356 12:47:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:19.356 12:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.356 12:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.356 12:47:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:19.616 /dev/nbd1 00:04:19.616 12:47:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.616 12:47:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.616 1+0 records in 00:04:19.616 1+0 records out 00:04:19.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170355 s, 24.0 MB/s 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:19.616 12:47:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:19.616 12:47:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.616 12:47:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.616 12:47:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.616 12:47:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.616 12:47:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:19.876 { 00:04:19.876 "nbd_device": "/dev/nbd0", 00:04:19.876 "bdev_name": "Malloc0" 00:04:19.876 }, 00:04:19.876 { 00:04:19.876 "nbd_device": "/dev/nbd1", 00:04:19.876 "bdev_name": "Malloc1" 00:04:19.876 } 00:04:19.876 ]' 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.876 { 00:04:19.876 "nbd_device": "/dev/nbd0", 00:04:19.876 "bdev_name": "Malloc0" 00:04:19.876 }, 00:04:19.876 { 00:04:19.876 "nbd_device": "/dev/nbd1", 00:04:19.876 "bdev_name": "Malloc1" 00:04:19.876 } 00:04:19.876 ]' 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.876 /dev/nbd1' 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.876 /dev/nbd1' 00:04:19.876 
12:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.876 256+0 records in 00:04:19.876 256+0 records out 00:04:19.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103211 s, 102 MB/s 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.876 256+0 records in 00:04:19.876 256+0 records out 00:04:19.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143996 s, 72.8 MB/s 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.876 256+0 records in 00:04:19.876 256+0 records out 00:04:19.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152321 s, 68.8 MB/s 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.876 12:47:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.136 12:47:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.396 12:47:17 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.396 12:47:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.396 12:47:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.396 12:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.396 12:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.657 12:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.657 12:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.657 12:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.657 12:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:20.657 12:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.657 12:47:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.657 12:47:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.657 12:47:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.657 12:47:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.657 12:47:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.657 12:47:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:20.917 [2024-11-18 12:47:18.475074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.917 [2024-11-18 12:47:18.512599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.917 [2024-11-18 12:47:18.512600] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.917 [2024-11-18 12:47:18.553939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:20.917 [2024-11-18 12:47:18.553979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.212 12:47:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2135712 /var/tmp/spdk-nbd.sock 00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2135712 ']' 00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:24.212 12:47:21 event.app_repeat -- event/event.sh@39 -- # killprocess 2135712 00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2135712 ']' 00:04:24.212 12:47:21 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2135712 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2135712 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2135712' 00:04:24.213 killing process with pid 2135712 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2135712 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2135712 00:04:24.213 spdk_app_start is called in Round 0. 00:04:24.213 Shutdown signal received, stop current app iteration 00:04:24.213 Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 reinitialization... 00:04:24.213 spdk_app_start is called in Round 1. 00:04:24.213 Shutdown signal received, stop current app iteration 00:04:24.213 Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 reinitialization... 00:04:24.213 spdk_app_start is called in Round 2. 
00:04:24.213 Shutdown signal received, stop current app iteration 00:04:24.213 Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 reinitialization... 00:04:24.213 spdk_app_start is called in Round 3. 00:04:24.213 Shutdown signal received, stop current app iteration 00:04:24.213 12:47:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:24.213 12:47:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:24.213 00:04:24.213 real 0m16.466s 00:04:24.213 user 0m36.205s 00:04:24.213 sys 0m2.583s 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.213 12:47:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.213 ************************************ 00:04:24.213 END TEST app_repeat 00:04:24.213 ************************************ 00:04:24.213 12:47:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:24.213 12:47:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:24.213 12:47:21 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.213 12:47:21 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.213 12:47:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.213 ************************************ 00:04:24.213 START TEST cpu_locks 00:04:24.213 ************************************ 00:04:24.213 12:47:21 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:24.213 * Looking for test storage... 
00:04:24.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:24.213 12:47:21 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:24.213 12:47:21 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:24.213 12:47:21 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:24.473 12:47:21 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:24.473 12:47:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.473 12:47:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.474 12:47:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:24.474 12:47:21 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.474 12:47:21 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:24.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.474 --rc genhtml_branch_coverage=1 00:04:24.474 --rc genhtml_function_coverage=1 00:04:24.474 --rc genhtml_legend=1 00:04:24.474 --rc geninfo_all_blocks=1 00:04:24.474 --rc geninfo_unexecuted_blocks=1 00:04:24.474 00:04:24.474 ' 00:04:24.474 12:47:21 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:24.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.474 --rc genhtml_branch_coverage=1 00:04:24.474 --rc genhtml_function_coverage=1 00:04:24.474 --rc genhtml_legend=1 00:04:24.474 --rc geninfo_all_blocks=1 00:04:24.474 --rc geninfo_unexecuted_blocks=1 
00:04:24.474 00:04:24.474 ' 00:04:24.474 12:47:21 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:24.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.474 --rc genhtml_branch_coverage=1 00:04:24.474 --rc genhtml_function_coverage=1 00:04:24.474 --rc genhtml_legend=1 00:04:24.474 --rc geninfo_all_blocks=1 00:04:24.474 --rc geninfo_unexecuted_blocks=1 00:04:24.474 00:04:24.474 ' 00:04:24.474 12:47:21 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:24.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.474 --rc genhtml_branch_coverage=1 00:04:24.474 --rc genhtml_function_coverage=1 00:04:24.474 --rc genhtml_legend=1 00:04:24.474 --rc geninfo_all_blocks=1 00:04:24.474 --rc geninfo_unexecuted_blocks=1 00:04:24.474 00:04:24.474 ' 00:04:24.474 12:47:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:24.474 12:47:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:24.474 12:47:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:24.474 12:47:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:24.474 12:47:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.474 12:47:21 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.474 12:47:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:24.474 ************************************ 00:04:24.474 START TEST default_locks 00:04:24.474 ************************************ 00:04:24.474 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:24.474 12:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2138708 00:04:24.474 12:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2138708 00:04:24.474 12:47:22 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.474 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2138708 ']' 00:04:24.474 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.474 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:24.474 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.474 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:24.474 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:24.474 [2024-11-18 12:47:22.067286] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:24.474 [2024-11-18 12:47:22.067328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138708 ] 00:04:24.474 [2024-11-18 12:47:22.140941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.734 [2024-11-18 12:47:22.183614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.734 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.734 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:24.734 12:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2138708 00:04:24.734 12:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2138708 00:04:24.734 12:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:25.303 lslocks: write error 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2138708 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2138708 ']' 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2138708 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2138708 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 2138708' 00:04:25.303 killing process with pid 2138708 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2138708 00:04:25.303 12:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2138708 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2138708 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2138708 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2138708 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2138708 ']' 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:25.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2138708) - No such process 00:04:25.563 ERROR: process (pid: 2138708) is no longer running 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:25.563 00:04:25.563 real 0m1.180s 00:04:25.563 user 0m1.139s 00:04:25.563 sys 0m0.529s 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:25.563 12:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:25.563 ************************************ 00:04:25.563 END TEST default_locks 00:04:25.563 ************************************ 00:04:25.563 12:47:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:25.563 12:47:23 event.cpu_locks -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:25.563 12:47:23 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.564 12:47:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:25.564 ************************************ 00:04:25.564 START TEST default_locks_via_rpc 00:04:25.564 ************************************ 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2138967 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2138967 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2138967 ']' 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:25.824 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.824 [2024-11-18 12:47:23.312754] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:25.824 [2024-11-18 12:47:23.312795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138967 ] 00:04:25.824 [2024-11-18 12:47:23.386126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.824 [2024-11-18 12:47:23.425759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.084 12:47:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2138967 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2138967 00:04:26.084 12:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2138967 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2138967 ']' 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2138967 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2138967 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2138967' 00:04:26.655 killing process with pid 2138967 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2138967 00:04:26.655 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2138967 00:04:26.915 00:04:26.915 real 0m1.193s 00:04:26.915 user 0m1.148s 00:04:26.915 sys 0m0.539s 00:04:26.915 12:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.915 12:47:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.915 ************************************ 00:04:26.915 END TEST default_locks_via_rpc 00:04:26.915 ************************************ 00:04:26.915 12:47:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:26.915 12:47:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.915 12:47:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.915 12:47:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:26.915 ************************************ 00:04:26.915 START TEST non_locking_app_on_locked_coremask 00:04:26.915 ************************************ 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2139221 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2139221 /var/tmp/spdk.sock 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2139221 ']' 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:26.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:26.915 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:26.915 [2024-11-18 12:47:24.569614] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:26.915 [2024-11-18 12:47:24.569654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139221 ] 00:04:27.175 [2024-11-18 12:47:24.646077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.175 [2024-11-18 12:47:24.688673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2139234 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2139234 /var/tmp/spdk2.sock 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2139234 ']' 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:27.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:27.436 12:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:27.436 [2024-11-18 12:47:24.952883] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:27.436 [2024-11-18 12:47:24.952932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139234 ] 00:04:27.436 [2024-11-18 12:47:25.045798] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:27.436 [2024-11-18 12:47:25.045825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.436 [2024-11-18 12:47:25.134938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.407 12:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:28.407 12:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:28.407 12:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2139221 00:04:28.407 12:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2139221 00:04:28.407 12:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:28.407 lslocks: write error 00:04:28.407 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2139221 00:04:28.407 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2139221 ']' 00:04:28.407 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2139221 00:04:28.668 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:28.668 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:28.668 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2139221 00:04:28.668 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:28.668 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:28.668 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 2139221' 00:04:28.668 killing process with pid 2139221 00:04:28.668 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2139221 00:04:28.668 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2139221 00:04:29.238 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2139234 00:04:29.238 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2139234 ']' 00:04:29.238 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2139234 00:04:29.238 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:29.239 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:29.239 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2139234 00:04:29.239 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:29.239 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:29.239 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2139234' 00:04:29.239 killing process with pid 2139234 00:04:29.239 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2139234 00:04:29.239 12:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2139234 00:04:29.499 00:04:29.499 real 0m2.589s 00:04:29.499 user 0m2.746s 00:04:29.499 sys 0m0.835s 00:04:29.499 12:47:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.499 12:47:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.499 ************************************ 00:04:29.499 END TEST non_locking_app_on_locked_coremask 00:04:29.499 ************************************ 00:04:29.499 12:47:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:29.499 12:47:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.499 12:47:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.499 12:47:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.499 ************************************ 00:04:29.499 START TEST locking_app_on_unlocked_coremask 00:04:29.499 ************************************ 00:04:29.499 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:29.499 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2139722 00:04:29.499 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2139722 /var/tmp/spdk.sock 00:04:29.499 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:29.499 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2139722 ']' 00:04:29.499 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.499 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.499 12:47:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.499 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.499 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.759 [2024-11-18 12:47:27.234849] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:29.759 [2024-11-18 12:47:27.234889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139722 ] 00:04:29.759 [2024-11-18 12:47:27.308253] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:29.759 [2024-11-18 12:47:27.308278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.759 [2024-11-18 12:47:27.350674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2139729 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2139729 /var/tmp/spdk2.sock 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2139729 ']' 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:30.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:30.019 12:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:30.019 [2024-11-18 12:47:27.621022] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:30.019 [2024-11-18 12:47:27.621069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139729 ] 00:04:30.019 [2024-11-18 12:47:27.710019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.279 [2024-11-18 12:47:27.798928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.848 12:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.848 12:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:30.848 12:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2139729 00:04:30.848 12:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.848 12:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2139729 00:04:31.418 lslocks: write error 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2139722 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2139722 ']' 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2139722 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2139722 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2139722' 00:04:31.418 killing process with pid 2139722 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2139722 00:04:31.418 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2139722 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2139729 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2139729 ']' 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2139729 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2139729 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2139729' 00:04:32.357 killing process with pid 2139729 00:04:32.357 12:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2139729 00:04:32.357 12:47:29 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2139729 00:04:32.617 00:04:32.617 real 0m2.879s 00:04:32.617 user 0m3.018s 00:04:32.617 sys 0m0.955s 00:04:32.617 12:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.617 12:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.617 ************************************ 00:04:32.617 END TEST locking_app_on_unlocked_coremask 00:04:32.617 ************************************ 00:04:32.617 12:47:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:32.617 12:47:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.617 12:47:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.617 12:47:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.617 ************************************ 00:04:32.617 START TEST locking_app_on_locked_coremask 00:04:32.617 ************************************ 00:04:32.617 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:32.617 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2140223 00:04:32.617 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2140223 /var/tmp/spdk.sock 00:04:32.617 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.617 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2140223 ']' 00:04:32.617 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:32.618 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:32.618 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.618 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:32.618 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.618 [2024-11-18 12:47:30.181906] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:32.618 [2024-11-18 12:47:30.181949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140223 ] 00:04:32.618 [2024-11-18 12:47:30.256384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.618 [2024-11-18 12:47:30.297118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2140226 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2140226 /var/tmp/spdk2.sock 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2140226 /var/tmp/spdk2.sock 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2140226 /var/tmp/spdk2.sock 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2140226 ']' 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:32.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:32.878 12:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.878 [2024-11-18 12:47:30.574612] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:32.878 [2024-11-18 12:47:30.574654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140226 ] 00:04:33.138 [2024-11-18 12:47:30.666751] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2140223 has claimed it. 00:04:33.138 [2024-11-18 12:47:30.666794] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:33.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2140226) - No such process 00:04:33.708 ERROR: process (pid: 2140226) is no longer running 00:04:33.708 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:33.708 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:33.708 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:33.708 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:33.708 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:33.708 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:33.708 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2140223 00:04:33.708 12:47:31 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2140223 00:04:33.708 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.278 lslocks: write error 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2140223 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2140223 ']' 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2140223 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2140223 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2140223' 00:04:34.278 killing process with pid 2140223 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2140223 00:04:34.278 12:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2140223 00:04:34.538 00:04:34.538 real 0m1.999s 00:04:34.538 user 0m2.138s 00:04:34.538 sys 0m0.676s 00:04:34.538 12:47:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.538 12:47:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:34.538 ************************************ 00:04:34.538 END TEST locking_app_on_locked_coremask 00:04:34.538 ************************************ 00:04:34.538 12:47:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:34.538 12:47:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.538 12:47:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.538 12:47:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.538 ************************************ 00:04:34.538 START TEST locking_overlapped_coremask 00:04:34.538 ************************************ 00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2140500 00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2140500 /var/tmp/spdk.sock 00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2140500 ']' 00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:34.538 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.798 [2024-11-18 12:47:32.251770] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:34.798 [2024-11-18 12:47:32.251818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140500 ] 00:04:34.798 [2024-11-18 12:47:32.308868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:34.798 [2024-11-18 12:47:32.352264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.798 [2024-11-18 12:47:32.352411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.798 [2024-11-18 12:47:32.352412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2140711 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2140711 /var/tmp/spdk2.sock 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 2140711 /var/tmp/spdk2.sock 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2140711 /var/tmp/spdk2.sock 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2140711 ']' 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:35.058 12:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.058 [2024-11-18 12:47:32.613634] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:35.058 [2024-11-18 12:47:32.613682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140711 ] 00:04:35.058 [2024-11-18 12:47:32.705389] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2140500 has claimed it. 00:04:35.058 [2024-11-18 12:47:32.705432] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:35.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2140711) - No such process 00:04:35.627 ERROR: process (pid: 2140711) is no longer running 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2140500 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2140500 ']' 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2140500 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2140500 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2140500' 00:04:35.628 killing process with pid 2140500 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2140500 00:04:35.628 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2140500 00:04:36.198 00:04:36.198 real 0m1.423s 00:04:36.198 user 0m3.967s 00:04:36.198 sys 0m0.377s 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.198 
************************************ 00:04:36.198 END TEST locking_overlapped_coremask 00:04:36.198 ************************************ 00:04:36.198 12:47:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:36.198 12:47:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.198 12:47:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.198 12:47:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.198 ************************************ 00:04:36.198 START TEST locking_overlapped_coremask_via_rpc 00:04:36.198 ************************************ 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2140803 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2140803 /var/tmp/spdk.sock 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2140803 ']' 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:36.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.198 12:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.198 [2024-11-18 12:47:33.738636] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:36.198 [2024-11-18 12:47:33.738679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140803 ] 00:04:36.198 [2024-11-18 12:47:33.815852] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:36.198 [2024-11-18 12:47:33.815878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:36.198 [2024-11-18 12:47:33.860883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.198 [2024-11-18 12:47:33.860902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.198 [2024-11-18 12:47:33.860905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2140993 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2140993 /var/tmp/spdk2.sock 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2140993 ']' 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:37.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:37.138 12:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.138 [2024-11-18 12:47:34.633977] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:37.138 [2024-11-18 12:47:34.634026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140993 ] 00:04:37.138 [2024-11-18 12:47:34.727021] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:37.138 [2024-11-18 12:47:34.727046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:37.138 [2024-11-18 12:47:34.814947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.138 [2024-11-18 12:47:34.815060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:37.138 [2024-11-18 12:47:34.815061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.078 12:47:35 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.078 [2024-11-18 12:47:35.481430] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2140803 has claimed it. 00:04:38.078 request: 00:04:38.078 { 00:04:38.078 "method": "framework_enable_cpumask_locks", 00:04:38.078 "req_id": 1 00:04:38.078 } 00:04:38.078 Got JSON-RPC error response 00:04:38.078 response: 00:04:38.078 { 00:04:38.078 "code": -32603, 00:04:38.078 "message": "Failed to claim CPU core: 2" 00:04:38.078 } 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2140803 /var/tmp/spdk.sock 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 
-- # '[' -z 2140803 ']' 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2140993 /var/tmp/spdk2.sock 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2140993 ']' 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:38.078 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.338 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:38.338 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:38.338 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:38.338 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:38.338 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:38.338 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:38.338 00:04:38.338 real 0m2.200s 00:04:38.338 user 0m0.940s 00:04:38.338 sys 0m0.189s 00:04:38.338 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:38.338 12:47:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.338 ************************************ 00:04:38.338 END TEST locking_overlapped_coremask_via_rpc 00:04:38.338 ************************************ 00:04:38.338 12:47:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:38.338 12:47:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2140803 ]] 00:04:38.338 12:47:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2140803 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2140803 ']' 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2140803 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2140803 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2140803' 00:04:38.338 killing process with pid 2140803 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2140803 00:04:38.338 12:47:35 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2140803 00:04:38.599 12:47:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2140993 ]] 00:04:38.599 12:47:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2140993 00:04:38.599 12:47:36 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2140993 ']' 00:04:38.599 12:47:36 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2140993 00:04:38.599 12:47:36 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:38.599 12:47:36 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:38.599 12:47:36 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2140993 00:04:38.859 12:47:36 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:38.859 12:47:36 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:38.859 12:47:36 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2140993' 00:04:38.859 killing process with pid 2140993 00:04:38.859 12:47:36 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2140993 00:04:38.859 12:47:36 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2140993 00:04:39.120 12:47:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:39.120 12:47:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:39.120 12:47:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2140803 ]] 00:04:39.120 12:47:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2140803 00:04:39.120 12:47:36 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2140803 ']' 00:04:39.120 12:47:36 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2140803 00:04:39.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2140803) - No such process 00:04:39.120 12:47:36 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2140803 is not found' 00:04:39.120 Process with pid 2140803 is not found 00:04:39.120 12:47:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2140993 ]] 00:04:39.120 12:47:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2140993 00:04:39.120 12:47:36 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2140993 ']' 00:04:39.120 12:47:36 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2140993 00:04:39.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2140993) - No such process 00:04:39.120 12:47:36 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2140993 is not found' 00:04:39.120 Process with pid 2140993 is not found 00:04:39.120 12:47:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:39.120 00:04:39.120 real 0m14.844s 00:04:39.120 user 0m26.256s 00:04:39.120 sys 0m5.052s 00:04:39.120 12:47:36 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.120 
12:47:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.120 ************************************ 00:04:39.120 END TEST cpu_locks 00:04:39.120 ************************************ 00:04:39.120 00:04:39.120 real 0m39.634s 00:04:39.120 user 1m15.672s 00:04:39.120 sys 0m8.639s 00:04:39.120 12:47:36 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.120 12:47:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.120 ************************************ 00:04:39.120 END TEST event 00:04:39.120 ************************************ 00:04:39.120 12:47:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:39.120 12:47:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.120 12:47:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.120 12:47:36 -- common/autotest_common.sh@10 -- # set +x 00:04:39.120 ************************************ 00:04:39.120 START TEST thread 00:04:39.120 ************************************ 00:04:39.120 12:47:36 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:39.380 * Looking for test storage... 
00:04:39.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:39.380 12:47:36 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.380 12:47:36 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.380 12:47:36 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.380 12:47:36 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.380 12:47:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.380 12:47:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.380 12:47:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.380 12:47:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.380 12:47:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.380 12:47:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.380 12:47:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.380 12:47:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.380 12:47:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.380 12:47:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.380 12:47:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.381 12:47:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:39.381 12:47:36 thread -- scripts/common.sh@345 -- # : 1 00:04:39.381 12:47:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.381 12:47:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.381 12:47:36 thread -- scripts/common.sh@365 -- # decimal 1 00:04:39.381 12:47:36 thread -- scripts/common.sh@353 -- # local d=1 00:04:39.381 12:47:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.381 12:47:36 thread -- scripts/common.sh@355 -- # echo 1 00:04:39.381 12:47:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.381 12:47:36 thread -- scripts/common.sh@366 -- # decimal 2 00:04:39.381 12:47:36 thread -- scripts/common.sh@353 -- # local d=2 00:04:39.381 12:47:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.381 12:47:36 thread -- scripts/common.sh@355 -- # echo 2 00:04:39.381 12:47:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.381 12:47:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.381 12:47:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.381 12:47:36 thread -- scripts/common.sh@368 -- # return 0 00:04:39.381 12:47:36 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.381 12:47:36 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.381 --rc genhtml_branch_coverage=1 00:04:39.381 --rc genhtml_function_coverage=1 00:04:39.381 --rc genhtml_legend=1 00:04:39.381 --rc geninfo_all_blocks=1 00:04:39.381 --rc geninfo_unexecuted_blocks=1 00:04:39.381 00:04:39.381 ' 00:04:39.381 12:47:36 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.381 --rc genhtml_branch_coverage=1 00:04:39.381 --rc genhtml_function_coverage=1 00:04:39.381 --rc genhtml_legend=1 00:04:39.381 --rc geninfo_all_blocks=1 00:04:39.381 --rc geninfo_unexecuted_blocks=1 00:04:39.381 00:04:39.381 ' 00:04:39.381 12:47:36 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.381 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.381 --rc genhtml_branch_coverage=1 00:04:39.381 --rc genhtml_function_coverage=1 00:04:39.381 --rc genhtml_legend=1 00:04:39.381 --rc geninfo_all_blocks=1 00:04:39.381 --rc geninfo_unexecuted_blocks=1 00:04:39.381 00:04:39.381 ' 00:04:39.381 12:47:36 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.381 --rc genhtml_branch_coverage=1 00:04:39.381 --rc genhtml_function_coverage=1 00:04:39.381 --rc genhtml_legend=1 00:04:39.381 --rc geninfo_all_blocks=1 00:04:39.381 --rc geninfo_unexecuted_blocks=1 00:04:39.381 00:04:39.381 ' 00:04:39.381 12:47:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:39.381 12:47:36 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:39.381 12:47:36 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.381 12:47:36 thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.381 ************************************ 00:04:39.381 START TEST thread_poller_perf 00:04:39.381 ************************************ 00:04:39.381 12:47:36 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:39.381 [2024-11-18 12:47:36.977891] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:39.381 [2024-11-18 12:47:36.977954] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141564 ] 00:04:39.381 [2024-11-18 12:47:37.056250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.641 [2024-11-18 12:47:37.098376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.641 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:40.580 [2024-11-18T11:47:38.282Z] ====================================== 00:04:40.580 [2024-11-18T11:47:38.282Z] busy:2307195538 (cyc) 00:04:40.580 [2024-11-18T11:47:38.282Z] total_run_count: 406000 00:04:40.580 [2024-11-18T11:47:38.282Z] tsc_hz: 2300000000 (cyc) 00:04:40.580 [2024-11-18T11:47:38.282Z] ====================================== 00:04:40.580 [2024-11-18T11:47:38.282Z] poller_cost: 5682 (cyc), 2470 (nsec) 00:04:40.580 00:04:40.580 real 0m1.188s 00:04:40.580 user 0m1.109s 00:04:40.580 sys 0m0.075s 00:04:40.580 12:47:38 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.580 12:47:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.580 ************************************ 00:04:40.580 END TEST thread_poller_perf 00:04:40.580 ************************************ 00:04:40.580 12:47:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:40.580 12:47:38 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:40.580 12:47:38 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.580 12:47:38 thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.580 ************************************ 00:04:40.580 START TEST thread_poller_perf 00:04:40.580 
************************************ 00:04:40.580 12:47:38 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:40.580 [2024-11-18 12:47:38.234681] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:40.580 [2024-11-18 12:47:38.234748] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141821 ] 00:04:40.840 [2024-11-18 12:47:38.314374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.840 [2024-11-18 12:47:38.355196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.840 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:41.779 [2024-11-18T11:47:39.481Z] ====================================== 00:04:41.779 [2024-11-18T11:47:39.481Z] busy:2301753182 (cyc) 00:04:41.779 [2024-11-18T11:47:39.481Z] total_run_count: 5393000 00:04:41.779 [2024-11-18T11:47:39.481Z] tsc_hz: 2300000000 (cyc) 00:04:41.779 [2024-11-18T11:47:39.481Z] ====================================== 00:04:41.779 [2024-11-18T11:47:39.481Z] poller_cost: 426 (cyc), 185 (nsec) 00:04:41.779 00:04:41.779 real 0m1.183s 00:04:41.779 user 0m1.107s 00:04:41.779 sys 0m0.072s 00:04:41.779 12:47:39 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.779 12:47:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.779 ************************************ 00:04:41.779 END TEST thread_poller_perf 00:04:41.779 ************************************ 00:04:41.779 12:47:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:41.779 00:04:41.779 real 0m2.679s 00:04:41.779 user 0m2.366s 00:04:41.779 sys 0m0.329s 00:04:41.779 12:47:39 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.779 12:47:39 thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.779 ************************************ 00:04:41.779 END TEST thread 00:04:41.779 ************************************ 00:04:41.779 12:47:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:41.779 12:47:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:41.779 12:47:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.779 12:47:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.779 12:47:39 -- common/autotest_common.sh@10 -- # set +x 00:04:42.039 ************************************ 00:04:42.039 START TEST app_cmdline 00:04:42.039 ************************************ 00:04:42.039 12:47:39 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:42.039 * Looking for test storage... 00:04:42.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:42.039 12:47:39 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.039 12:47:39 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.039 12:47:39 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.039 12:47:39 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.039 12:47:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.040 12:47:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.040 --rc genhtml_branch_coverage=1 
00:04:42.040 --rc genhtml_function_coverage=1 00:04:42.040 --rc genhtml_legend=1 00:04:42.040 --rc geninfo_all_blocks=1 00:04:42.040 --rc geninfo_unexecuted_blocks=1 00:04:42.040 00:04:42.040 ' 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.040 --rc genhtml_branch_coverage=1 00:04:42.040 --rc genhtml_function_coverage=1 00:04:42.040 --rc genhtml_legend=1 00:04:42.040 --rc geninfo_all_blocks=1 00:04:42.040 --rc geninfo_unexecuted_blocks=1 00:04:42.040 00:04:42.040 ' 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.040 --rc genhtml_branch_coverage=1 00:04:42.040 --rc genhtml_function_coverage=1 00:04:42.040 --rc genhtml_legend=1 00:04:42.040 --rc geninfo_all_blocks=1 00:04:42.040 --rc geninfo_unexecuted_blocks=1 00:04:42.040 00:04:42.040 ' 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.040 --rc genhtml_branch_coverage=1 00:04:42.040 --rc genhtml_function_coverage=1 00:04:42.040 --rc genhtml_legend=1 00:04:42.040 --rc geninfo_all_blocks=1 00:04:42.040 --rc geninfo_unexecuted_blocks=1 00:04:42.040 00:04:42.040 ' 00:04:42.040 12:47:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:42.040 12:47:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:42.040 12:47:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2142115 00:04:42.040 12:47:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2142115 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2142115 ']' 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.040 12:47:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:42.040 [2024-11-18 12:47:39.725237] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:42.040 [2024-11-18 12:47:39.725283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142115 ] 00:04:42.300 [2024-11-18 12:47:39.801354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.300 [2024-11-18 12:47:39.844241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.560 12:47:40 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.560 12:47:40 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:04:42.560 12:47:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:42.560 { 00:04:42.560 "version": "SPDK v25.01-pre git sha1 403bf887a", 00:04:42.560 "fields": { 00:04:42.560 "major": 25, 00:04:42.560 "minor": 1, 00:04:42.560 "patch": 0, 00:04:42.560 "suffix": "-pre", 00:04:42.560 "commit": "403bf887a" 00:04:42.560 } 00:04:42.560 } 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:42.820 request: 00:04:42.820 { 00:04:42.820 "method": "env_dpdk_get_mem_stats", 00:04:42.820 "req_id": 1 00:04:42.820 } 00:04:42.820 Got JSON-RPC error response 00:04:42.820 response: 00:04:42.820 { 00:04:42.820 "code": -32601, 00:04:42.820 "message": "Method not found" 00:04:42.820 } 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:42.820 12:47:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2142115 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2142115 ']' 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2142115 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:04:42.820 12:47:40 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:43.081 12:47:40 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2142115 00:04:43.081 12:47:40 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:43.081 12:47:40 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:43.081 12:47:40 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2142115' 00:04:43.081 killing process with pid 2142115 00:04:43.081 
12:47:40 app_cmdline -- common/autotest_common.sh@971 -- # kill 2142115 00:04:43.081 12:47:40 app_cmdline -- common/autotest_common.sh@976 -- # wait 2142115 00:04:43.343 00:04:43.343 real 0m1.362s 00:04:43.343 user 0m1.587s 00:04:43.343 sys 0m0.455s 00:04:43.343 12:47:40 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.343 12:47:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:43.343 ************************************ 00:04:43.343 END TEST app_cmdline 00:04:43.343 ************************************ 00:04:43.343 12:47:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:43.343 12:47:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.343 12:47:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.343 12:47:40 -- common/autotest_common.sh@10 -- # set +x 00:04:43.343 ************************************ 00:04:43.343 START TEST version 00:04:43.343 ************************************ 00:04:43.343 12:47:40 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:43.343 * Looking for test storage... 
00:04:43.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:43.343 12:47:41 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:43.343 12:47:41 version -- common/autotest_common.sh@1691 -- # lcov --version 00:04:43.343 12:47:41 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:43.603 12:47:41 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:43.603 12:47:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.603 12:47:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.603 12:47:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.603 12:47:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.603 12:47:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.603 12:47:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.603 12:47:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.603 12:47:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.603 12:47:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.603 12:47:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.603 12:47:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.603 12:47:41 version -- scripts/common.sh@344 -- # case "$op" in 00:04:43.603 12:47:41 version -- scripts/common.sh@345 -- # : 1 00:04:43.603 12:47:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.603 12:47:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.603 12:47:41 version -- scripts/common.sh@365 -- # decimal 1 00:04:43.603 12:47:41 version -- scripts/common.sh@353 -- # local d=1 00:04:43.603 12:47:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.603 12:47:41 version -- scripts/common.sh@355 -- # echo 1 00:04:43.603 12:47:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.603 12:47:41 version -- scripts/common.sh@366 -- # decimal 2 00:04:43.603 12:47:41 version -- scripts/common.sh@353 -- # local d=2 00:04:43.603 12:47:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.603 12:47:41 version -- scripts/common.sh@355 -- # echo 2 00:04:43.603 12:47:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.603 12:47:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.603 12:47:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.603 12:47:41 version -- scripts/common.sh@368 -- # return 0 00:04:43.603 12:47:41 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.603 12:47:41 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:43.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.603 --rc genhtml_branch_coverage=1 00:04:43.603 --rc genhtml_function_coverage=1 00:04:43.603 --rc genhtml_legend=1 00:04:43.603 --rc geninfo_all_blocks=1 00:04:43.603 --rc geninfo_unexecuted_blocks=1 00:04:43.603 00:04:43.603 ' 00:04:43.603 12:47:41 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:43.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.603 --rc genhtml_branch_coverage=1 00:04:43.603 --rc genhtml_function_coverage=1 00:04:43.603 --rc genhtml_legend=1 00:04:43.603 --rc geninfo_all_blocks=1 00:04:43.603 --rc geninfo_unexecuted_blocks=1 00:04:43.603 00:04:43.603 ' 00:04:43.603 12:47:41 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:43.603 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.603 --rc genhtml_branch_coverage=1 00:04:43.603 --rc genhtml_function_coverage=1 00:04:43.603 --rc genhtml_legend=1 00:04:43.603 --rc geninfo_all_blocks=1 00:04:43.603 --rc geninfo_unexecuted_blocks=1 00:04:43.603 00:04:43.603 ' 00:04:43.603 12:47:41 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:43.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.603 --rc genhtml_branch_coverage=1 00:04:43.603 --rc genhtml_function_coverage=1 00:04:43.603 --rc genhtml_legend=1 00:04:43.603 --rc geninfo_all_blocks=1 00:04:43.603 --rc geninfo_unexecuted_blocks=1 00:04:43.603 00:04:43.603 ' 00:04:43.603 12:47:41 version -- app/version.sh@17 -- # get_header_version major 00:04:43.603 12:47:41 version -- app/version.sh@14 -- # cut -f2 00:04:43.603 12:47:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:43.603 12:47:41 version -- app/version.sh@14 -- # tr -d '"' 00:04:43.603 12:47:41 version -- app/version.sh@17 -- # major=25 00:04:43.603 12:47:41 version -- app/version.sh@18 -- # get_header_version minor 00:04:43.603 12:47:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:43.603 12:47:41 version -- app/version.sh@14 -- # cut -f2 00:04:43.603 12:47:41 version -- app/version.sh@14 -- # tr -d '"' 00:04:43.603 12:47:41 version -- app/version.sh@18 -- # minor=1 00:04:43.603 12:47:41 version -- app/version.sh@19 -- # get_header_version patch 00:04:43.603 12:47:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:43.603 12:47:41 version -- app/version.sh@14 -- # cut -f2 00:04:43.603 12:47:41 version -- app/version.sh@14 -- # tr -d '"' 00:04:43.604 
12:47:41 version -- app/version.sh@19 -- # patch=0 00:04:43.604 12:47:41 version -- app/version.sh@20 -- # get_header_version suffix 00:04:43.604 12:47:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:43.604 12:47:41 version -- app/version.sh@14 -- # cut -f2 00:04:43.604 12:47:41 version -- app/version.sh@14 -- # tr -d '"' 00:04:43.604 12:47:41 version -- app/version.sh@20 -- # suffix=-pre 00:04:43.604 12:47:41 version -- app/version.sh@22 -- # version=25.1 00:04:43.604 12:47:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:43.604 12:47:41 version -- app/version.sh@28 -- # version=25.1rc0 00:04:43.604 12:47:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:43.604 12:47:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:43.604 12:47:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:43.604 12:47:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:43.604 00:04:43.604 real 0m0.246s 00:04:43.604 user 0m0.146s 00:04:43.604 sys 0m0.143s 00:04:43.604 12:47:41 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.604 12:47:41 version -- common/autotest_common.sh@10 -- # set +x 00:04:43.604 ************************************ 00:04:43.604 END TEST version 00:04:43.604 ************************************ 00:04:43.604 12:47:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:43.604 12:47:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:43.604 12:47:41 -- spdk/autotest.sh@194 -- # uname -s 00:04:43.604 12:47:41 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:43.604 12:47:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:43.604 12:47:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:43.604 12:47:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:43.604 12:47:41 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:43.604 12:47:41 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:43.604 12:47:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.604 12:47:41 -- common/autotest_common.sh@10 -- # set +x 00:04:43.604 12:47:41 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:43.604 12:47:41 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:43.604 12:47:41 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:43.604 12:47:41 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:43.604 12:47:41 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:04:43.604 12:47:41 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:04:43.604 12:47:41 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:43.604 12:47:41 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:43.604 12:47:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.604 12:47:41 -- common/autotest_common.sh@10 -- # set +x 00:04:43.604 ************************************ 00:04:43.604 START TEST nvmf_tcp 00:04:43.604 ************************************ 00:04:43.604 12:47:41 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:43.864 * Looking for test storage... 
00:04:43.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:43.864 12:47:41 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:43.864 12:47:41 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:43.864 12:47:41 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:43.864 12:47:41 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.864 12:47:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.865 12:47:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:43.865 12:47:41 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.865 12:47:41 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.865 --rc genhtml_branch_coverage=1 00:04:43.865 --rc genhtml_function_coverage=1 00:04:43.865 --rc genhtml_legend=1 00:04:43.865 --rc geninfo_all_blocks=1 00:04:43.865 --rc geninfo_unexecuted_blocks=1 00:04:43.865 00:04:43.865 ' 00:04:43.865 12:47:41 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.865 --rc genhtml_branch_coverage=1 00:04:43.865 --rc genhtml_function_coverage=1 00:04:43.865 --rc genhtml_legend=1 00:04:43.865 --rc geninfo_all_blocks=1 00:04:43.865 --rc geninfo_unexecuted_blocks=1 00:04:43.865 00:04:43.865 ' 00:04:43.865 12:47:41 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.865 --rc genhtml_branch_coverage=1 00:04:43.865 --rc genhtml_function_coverage=1 00:04:43.865 --rc genhtml_legend=1 00:04:43.865 --rc geninfo_all_blocks=1 00:04:43.865 --rc geninfo_unexecuted_blocks=1 00:04:43.865 00:04:43.865 ' 00:04:43.865 12:47:41 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.865 --rc genhtml_branch_coverage=1 00:04:43.865 --rc genhtml_function_coverage=1 00:04:43.865 --rc genhtml_legend=1 00:04:43.865 --rc geninfo_all_blocks=1 00:04:43.865 --rc geninfo_unexecuted_blocks=1 00:04:43.865 00:04:43.865 ' 00:04:43.865 12:47:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:43.865 12:47:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:43.865 12:47:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:43.865 12:47:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:43.865 12:47:41 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.865 12:47:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.865 ************************************ 00:04:43.865 START TEST nvmf_target_core 00:04:43.865 ************************************ 00:04:43.865 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:44.126 * Looking for test storage... 
00:04:44.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:44.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.126 --rc genhtml_branch_coverage=1 00:04:44.126 --rc genhtml_function_coverage=1 00:04:44.126 --rc genhtml_legend=1 00:04:44.126 --rc geninfo_all_blocks=1 00:04:44.126 --rc geninfo_unexecuted_blocks=1 00:04:44.126 00:04:44.126 ' 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:44.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.126 --rc genhtml_branch_coverage=1 
00:04:44.126 --rc genhtml_function_coverage=1 00:04:44.126 --rc genhtml_legend=1 00:04:44.126 --rc geninfo_all_blocks=1 00:04:44.126 --rc geninfo_unexecuted_blocks=1 00:04:44.126 00:04:44.126 ' 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:44.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.126 --rc genhtml_branch_coverage=1 00:04:44.126 --rc genhtml_function_coverage=1 00:04:44.126 --rc genhtml_legend=1 00:04:44.126 --rc geninfo_all_blocks=1 00:04:44.126 --rc geninfo_unexecuted_blocks=1 00:04:44.126 00:04:44.126 ' 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:44.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.126 --rc genhtml_branch_coverage=1 00:04:44.126 --rc genhtml_function_coverage=1 00:04:44.126 --rc genhtml_legend=1 00:04:44.126 --rc geninfo_all_blocks=1 00:04:44.126 --rc geninfo_unexecuted_blocks=1 00:04:44.126 00:04:44.126 ' 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.126 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:44.127 ************************************ 00:04:44.127 START TEST nvmf_abort 00:04:44.127 ************************************ 00:04:44.127 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:44.388 * Looking for test storage... 
00:04:44.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.388 
12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:44.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.388 --rc genhtml_branch_coverage=1 00:04:44.388 --rc genhtml_function_coverage=1 00:04:44.388 --rc genhtml_legend=1 00:04:44.388 --rc geninfo_all_blocks=1 00:04:44.388 --rc 
geninfo_unexecuted_blocks=1 00:04:44.388 00:04:44.388 ' 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:44.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.388 --rc genhtml_branch_coverage=1 00:04:44.388 --rc genhtml_function_coverage=1 00:04:44.388 --rc genhtml_legend=1 00:04:44.388 --rc geninfo_all_blocks=1 00:04:44.388 --rc geninfo_unexecuted_blocks=1 00:04:44.388 00:04:44.388 ' 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:44.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.388 --rc genhtml_branch_coverage=1 00:04:44.388 --rc genhtml_function_coverage=1 00:04:44.388 --rc genhtml_legend=1 00:04:44.388 --rc geninfo_all_blocks=1 00:04:44.388 --rc geninfo_unexecuted_blocks=1 00:04:44.388 00:04:44.388 ' 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:44.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.388 --rc genhtml_branch_coverage=1 00:04:44.388 --rc genhtml_function_coverage=1 00:04:44.388 --rc genhtml_legend=1 00:04:44.388 --rc geninfo_all_blocks=1 00:04:44.388 --rc geninfo_unexecuted_blocks=1 00:04:44.388 00:04:44.388 ' 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.388 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.388 12:47:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:44.389 12:47:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:50.972 12:47:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:50.972 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:50.972 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:50.972 12:47:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:50.972 Found net devices under 0000:86:00.0: cvl_0_0 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:04:50.972 Found net devices under 0000:86:00.1: cvl_0_1 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:50.972 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:50.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:50.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:04:50.973 00:04:50.973 --- 10.0.0.2 ping statistics --- 00:04:50.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:50.973 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:50.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:50.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:04:50.973 00:04:50.973 --- 10.0.0.1 ping statistics --- 00:04:50.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:50.973 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:50.973 12:47:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2145798 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2145798 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2145798 ']' 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 [2024-11-18 12:47:48.081450] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:50.973 [2024-11-18 12:47:48.081499] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:50.973 [2024-11-18 12:47:48.161565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:50.973 [2024-11-18 12:47:48.202515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:50.973 [2024-11-18 12:47:48.202556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:50.973 [2024-11-18 12:47:48.202563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:50.973 [2024-11-18 12:47:48.202569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:50.973 [2024-11-18 12:47:48.202575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:50.973 [2024-11-18 12:47:48.203895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.973 [2024-11-18 12:47:48.204000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.973 [2024-11-18 12:47:48.204001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 [2024-11-18 12:47:48.352314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 Malloc0 00:04:50.973 12:47:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 Delay0 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 [2024-11-18 12:47:48.436226] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.973 12:47:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:50.973 [2024-11-18 12:47:48.512128] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:52.880 Initializing NVMe Controllers 00:04:52.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:52.880 controller IO queue size 128 less than required 00:04:52.880 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:52.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:52.880 Initialization complete. Launching workers. 
00:04:52.880 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 36384 00:04:52.880 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36448, failed to submit 62 00:04:52.880 success 36388, unsuccessful 60, failed 0 00:04:52.880 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:52.880 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.880 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:52.880 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.880 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:52.880 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:52.881 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:52.881 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:52.881 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:52.881 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:52.881 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:52.881 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:52.881 rmmod nvme_tcp 00:04:52.881 rmmod nvme_fabrics 00:04:53.141 rmmod nvme_keyring 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:53.141 12:47:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2145798 ']' 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2145798 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2145798 ']' 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2145798 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2145798 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2145798' 00:04:53.141 killing process with pid 2145798 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2145798 00:04:53.141 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2145798 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:53.401 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:55.313 12:47:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:55.313 00:04:55.313 real 0m11.183s 00:04:55.313 user 0m11.254s 00:04:55.313 sys 0m5.470s 00:04:55.313 12:47:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.313 12:47:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:55.313 ************************************ 00:04:55.313 END TEST nvmf_abort 00:04:55.313 ************************************ 00:04:55.313 12:47:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:55.313 12:47:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:55.313 12:47:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.313 12:47:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:55.313 ************************************ 00:04:55.313 START TEST nvmf_ns_hotplug_stress 00:04:55.313 ************************************ 00:04:55.313 12:47:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:55.574 * Looking for test storage... 00:04:55.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.574 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.574 
12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.575 12:47:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.575 --rc genhtml_branch_coverage=1 00:04:55.575 --rc genhtml_function_coverage=1 00:04:55.575 --rc genhtml_legend=1 00:04:55.575 --rc geninfo_all_blocks=1 00:04:55.575 --rc geninfo_unexecuted_blocks=1 00:04:55.575 00:04:55.575 ' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.575 --rc genhtml_branch_coverage=1 00:04:55.575 --rc genhtml_function_coverage=1 00:04:55.575 --rc genhtml_legend=1 00:04:55.575 --rc geninfo_all_blocks=1 00:04:55.575 --rc geninfo_unexecuted_blocks=1 00:04:55.575 00:04:55.575 ' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.575 --rc genhtml_branch_coverage=1 00:04:55.575 --rc genhtml_function_coverage=1 00:04:55.575 --rc genhtml_legend=1 00:04:55.575 --rc geninfo_all_blocks=1 00:04:55.575 --rc geninfo_unexecuted_blocks=1 00:04:55.575 00:04:55.575 ' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.575 --rc genhtml_branch_coverage=1 00:04:55.575 --rc genhtml_function_coverage=1 00:04:55.575 --rc genhtml_legend=1 00:04:55.575 --rc geninfo_all_blocks=1 00:04:55.575 --rc geninfo_unexecuted_blocks=1 00:04:55.575 
00:04:55.575 ' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:55.575 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:55.576 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:55.576 12:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:02.161 12:47:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:02.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:02.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:02.161 12:47:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.161 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:02.162 Found net devices under 0000:86:00.0: cvl_0_0 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:02.162 12:47:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:02.162 Found net devices under 0000:86:00.1: cvl_0_1 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:02.162 12:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:02.162 12:47:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:02.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:02.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:05:02.162 00:05:02.162 --- 10.0.0.2 ping statistics --- 00:05:02.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:02.162 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:02.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:02.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:05:02.162 00:05:02.162 --- 10.0.0.1 ping statistics --- 00:05:02.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:02.162 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2149811 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2149811 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2149811 ']' 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
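The `nvmf_tcp_init` entries traced above (common.sh@250–291) build a two-interface loopback path: one port of the NIC is moved into a network namespace to act as the target, the other stays in the root namespace as the initiator, and a firewall rule plus bidirectional pings confirm the 4420/tcp path before the target app starts. A condensed dry-run sketch of that sequence, with commands echoed rather than executed (the real steps need root and the two cvl_0_* ports of the 0000:86:00.x NIC):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps traced in the log above.
# run() echoes each command instead of executing it, so this is safe
# to run anywhere; the real sequence requires root and the hardware.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0       # moved into the namespace; gets the target IP
INITIATOR_IF=cvl_0_1    # stays in the root namespace
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP traffic in on the initiator-side interface
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# sanity-check reachability in both directions, as the log does
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

With the path verified, every target-side command is then prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array), which is why the log shows `nvmf_tgt` being launched through `ip netns exec` at common.sh@508.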
00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:02.162 [2024-11-18 12:47:59.283830] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:02.162 [2024-11-18 12:47:59.283880] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:02.162 [2024-11-18 12:47:59.361290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.162 [2024-11-18 12:47:59.403818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:02.162 [2024-11-18 12:47:59.403856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:02.162 [2024-11-18 12:47:59.403863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.162 [2024-11-18 12:47:59.403870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.162 [2024-11-18 12:47:59.403874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
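The repeating `nvmf_subsystem_remove_ns` / `nvmf_subsystem_add_ns` / `bdev_null_resize` cycle that fills the rest of this log is the core of `ns_hotplug_stress.sh`: while `spdk_nvme_perf` drives random reads against the subsystem, the script repeatedly hot-removes namespace 1, re-attaches the Delay0 bdev, and grows the NULL1 bdev by one block each pass (`null_size` 1001, 1002, ...), so the suppressed `sct=0, sc=11` read errors during removal windows are expected. A simplified dry-run sketch of that loop, with `rpc.py` calls echoed instead of executed and a fixed iteration count standing in for the real `kill -0 $PERF_PID` liveness check:

```shell
#!/usr/bin/env bash
# Simplified sketch of the ns_hotplug_stress.sh loop traced in this log.
# rpc() echoes the would-be rpc.py invocation; the real test loops while
# the spdk_nvme_perf process (PERF_PID) is still alive.
rpc() { echo "+ rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000   # matches 'null_size=1000' at ns_hotplug_stress.sh@25

for _ in 1 2 3; do
    rpc nvmf_subsystem_remove_ns "$NQN" 1     # hot-remove namespace 1 under active I/O
    rpc nvmf_subsystem_add_ns "$NQN" Delay0   # re-attach the delay bdev as a namespace
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"   # resize NULL1: 1001, 1002, 1003, ...
done
```

Each `true` in the log is the return status of a successful `bdev_null_resize` RPC, and the interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are perf reporting the aborted reads that the hot-removal intentionally provokes.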
00:05:02.162 [2024-11-18 12:47:59.405321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.162 [2024-11-18 12:47:59.405430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.162 [2024-11-18 12:47:59.405430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:02.162 [2024-11-18 12:47:59.714519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.162 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:02.421 12:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:02.421 [2024-11-18 12:48:00.107962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:02.680 12:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:02.680 12:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:02.940 Malloc0 00:05:02.940 12:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:03.199 Delay0 00:05:03.199 12:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:03.458 12:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:03.717 NULL1 00:05:03.717 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:03.717 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:03.717 12:48:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2150195 00:05:03.718 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:03.718 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.977 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.237 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:04.237 12:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:04.496 true 00:05:04.496 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:04.496 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:04.755 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.755 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:04.755 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:05.014 true 00:05:05.014 12:48:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:05.014 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.952 Read completed with error (sct=0, sc=11) 00:05:06.211 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.211 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:06.211 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:06.471 true 00:05:06.471 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:06.471 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.731 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.990 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:06.990 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:06.990 true 00:05:06.990 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:07.251 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.189 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.449 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:08.449 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:08.708 true 00:05:08.708 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:08.708 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.645 12:48:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.645 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:09.645 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:09.904 true 00:05:09.904 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:09.904 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.162 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.422 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:10.422 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:10.422 true 00:05:10.422 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:10.422 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.800 12:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.800 12:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:11.800 12:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:11.800 true 00:05:12.060 12:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:12.060 12:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.060 12:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.320 12:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:12.320 12:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:12.580 true 00:05:12.580 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:12.580 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.520 12:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.780 12:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:13.780 12:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:14.039 true 00:05:14.039 12:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:14.039 12:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.299 12:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.299 12:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:14.299 12:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:14.559 true 00:05:14.559 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:14.559 12:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.758 12:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.758 12:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:15.758 12:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:16.018 true 00:05:16.018 12:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:16.018 12:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.957 12:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.217 12:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:17.217 12:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:17.217 true 00:05:17.217 12:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:17.217 12:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.477 12:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.736 12:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:17.736 12:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:17.736 true 00:05:17.996 12:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:17.996 12:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.935 12:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.195 12:48:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:19.195 12:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:19.195 true 00:05:19.455 12:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:19.455 12:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.024 12:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.284 12:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:20.284 12:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:20.544 true 00:05:20.544 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:20.544 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.803 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.063 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:21.063 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:21.063 true 00:05:21.063 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:21.063 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.448 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.449 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:22.449 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:22.708 true 00:05:22.708 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:22.708 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.648 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.648 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:23.648 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:23.907 true 00:05:23.907 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:23.907 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.166 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.426 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:24.426 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:24.686 true 00:05:24.686 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:24.686 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.624 12:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.885 12:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:25.885 12:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:26.144 true 00:05:26.144 12:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:26.144 12:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.083 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.083 12:48:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:27.083 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:27.342 true 00:05:27.342 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:27.342 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.601 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.601 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:27.601 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:27.859 true 00:05:27.859 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:27.859 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.237 12:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.237 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:05:29.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.237 12:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:29.237 12:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:29.497 true 00:05:29.497 12:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:29.497 12:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.436 12:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.436 12:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:30.436 12:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:30.696 true 00:05:30.696 12:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:30.696 12:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.956 12:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.956 12:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:30.956 12:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:31.216 true 00:05:31.216 12:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:31.216 12:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.596 12:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.596 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 
00:05:32.596 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:32.857 true 00:05:32.857 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:32.857 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.797 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.797 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:33.797 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:34.057 true 00:05:34.057 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:34.057 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.057 Initializing NVMe Controllers 00:05:34.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:34.057 Controller IO queue size 128, less than required. 00:05:34.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:34.057 Controller IO queue size 128, less than required. 
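The hotplug loop recorded above repeatedly calls `scripts/rpc.py nvmf_subsystem_remove_ns`, `nvmf_subsystem_add_ns`, and `bdev_null_resize` against the running target. Under the hood, `rpc.py` sends JSON-RPC 2.0 requests to the SPDK application over a Unix-domain socket. A minimal sketch of one iteration's request payloads, using the method names and arguments visible in the log (the exact parameter shapes follow SPDK's JSON-RPC conventions and are an assumption here, not copied from SPDK source; the `jsonrpc_request` helper is illustrative only):

```python
import json

def jsonrpc_request(method, params, req_id=1):
    """Build a JSON-RPC 2.0 request string of the kind scripts/rpc.py
    sends to the SPDK target over its Unix-domain socket (illustrative)."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

# One hotplug iteration as seen in the log: detach namespace 1,
# re-attach the Delay0 bdev, then resize the NULL1 null bdev.
reqs = [
    jsonrpc_request("nvmf_subsystem_remove_ns",
                    {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 1}),
    jsonrpc_request("nvmf_subsystem_add_ns",
                    {"nqn": "nqn.2016-06.io.spdk:cnode1",
                     "namespace": {"bdev_name": "Delay0"}}),  # param nesting assumed
    jsonrpc_request("bdev_null_resize",
                    {"name": "NULL1", "new_size": 1027}),
]
for r in reqs:
    print(r)
```

Racing these calls against active I/O is what produces the interleaved "Read completed with error (sct=0, sc=11)" messages: reads land on a namespace that has just been removed.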
00:05:34.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:34.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:34.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:34.057 Initialization complete. Launching workers.
00:05:34.057 ========================================================
00:05:34.057 Latency(us)
00:05:34.057 Device Information : IOPS MiB/s Average min max
00:05:34.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1850.47 0.90 45196.55 2590.94 1028273.31
00:05:34.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16649.33 8.13 7668.29 2268.88 306115.90
00:05:34.057 ========================================================
00:05:34.057 Total : 18499.80 9.03 11422.10 2268.88 1028273.31
00:05:34.057 00:05:34.317 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.317 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:34.317 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:34.577 true 00:05:34.577 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150195 00:05:34.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2150195) - No such process 00:05:34.577 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2150195 00:05:34.577 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.836 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:35.096 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:35.096 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:35.096 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:35.096 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:35.096 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:35.356 null0 00:05:35.356 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:35.356 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:35.356 12:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:35.356 null1 00:05:35.356 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:35.356 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:35.356 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create null2 100 4096 00:05:35.616 null2 00:05:35.616 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:35.616 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:35.616 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:35.876 null3 00:05:35.876 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:35.876 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:35.877 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:36.137 null4 00:05:36.137 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:36.137 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:36.137 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:36.137 null5 00:05:36.397 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:36.397 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:36.397 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:36.397 null6 00:05:36.397 12:48:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:36.397 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:36.397 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:36.657 null7 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:36.657 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:36.658 
12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2156217 2156218 2156220 2156222 2156224 2156226 2156228 2156229 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.658 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:36.917 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.917 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:36.917 
12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:36.917 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:36.917 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:36.917 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:36.917 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:36.917 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:37.176 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.176 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.176 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:37.176 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.176 12:48:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.176 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.176 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.177 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:37.440 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:37.440 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:37.440 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:37.440 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.440 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:37.440 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:37.440 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:37.440 12:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.440 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:37.701 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:37.701 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:37.701 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:37.701 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.701 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:05:37.701 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:37.701 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:37.701 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.960 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:37.961 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.961 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.961 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:37.961 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.961 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.961 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:38.220 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:38.220 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:38.220 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:38.220 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.220 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:38.220 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:38.220 12:48:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:38.220 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.480 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:38.480 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:38.480 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.740 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.000 12:48:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.000 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.000 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:39.000 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.000 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.000 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.000 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:39.000 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.260 12:48:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.260 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.261 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:39.261 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.261 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.261 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.261 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.261 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.261 12:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.522 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.522 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.522 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:39.522 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:39.522 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.522 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.522 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:39.522 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.782 12:48:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.782 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.041 
12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.041 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.042 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.301 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.301 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.301 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.301 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.301 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.301 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.301 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.302 12:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.562 12:48:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.562 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.822 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.822 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.823 12:48:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.823 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
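The interleaved `nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` calls traced above come from the stress loop at ns_hotplug_stress.sh lines 16-18. A dry-run sketch of that pattern follows; the `RPC` stub and the shuffled ordering are assumptions for illustration (the stub defaults to `echo`, so no live SPDK target is needed — point `RPC` at `scripts/rpc.py` to drive a real one):

```shell
# Dry-run sketch of the hotplug stress pattern traced above.
# RPC defaults to a harmless echo; set RPC to scripts/rpc.py to run
# against a live nvmf target. Argument order mirrors the log:
#   nvmf_subsystem_add_ns -n <nsid> <nqn> <bdev>
RPC="${RPC:-echo rpc.py}"
NQN="nqn.2016-06.io.spdk:cnode1"

hotplug_iteration() {
  local n
  # attach null bdevs null0..null7 as namespaces 1..8 in shuffled order
  for n in $(seq 1 8 | shuf); do
    $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
  done
  # detach them again, also shuffled, to exercise hotplug races
  for n in $(seq 1 8 | shuf); do
    $RPC nvmf_subsystem_remove_ns "$NQN" "$n"
  done
}

# the traced test repeats this 10 times: (( ++i )); (( i < 10 ))
for ((i = 0; i < 10; i++)); do
  hotplug_iteration
done
```

The shuffling is what produces the out-of-order nsid sequences (7, 1, 2, ... then 8, 4, 5, ...) visible in the trace.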
00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:41.083 rmmod nvme_tcp 00:05:41.083 rmmod nvme_fabrics 00:05:41.083 rmmod nvme_keyring 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:41.083 12:48:38 
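The `nvmfcleanup` teardown above (nvmf/common.sh@121-129, with the `rmmod nvme_tcp` / `rmmod nvme_fabrics` / `rmmod nvme_keyring` output) retries module unloads while references drain. A hedged sketch of that retry shape, with the unload command stubbed out as an assumption so the sketch is safe to execute without root:

```shell
# Sketch of the retry-unload loop traced above. UNLOAD defaults to a
# dry-run echo; set UNLOAD='modprobe -v -r' (as root) to really unload.
# The module list and retry count mirror the traced nvmf/common.sh.
UNLOAD="${UNLOAD:-echo would-unload}"

nvmf_unload() {
  sync                          # flush before touching the modules
  local try mod failed
  for try in {1..20}; do        # the traced loop is `for i in {1..20}`
    failed=0
    for mod in nvme-tcp nvme-fabrics; do
      $UNLOAD "$mod" || failed=1
    done
    if (( failed == 0 )); then
      return 0                  # all modules gone
    fi
    sleep 1                     # references may still be draining
  done
  return 1
}

nvmf_unload
```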
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2149811 ']' 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2149811 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2149811 ']' 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2149811 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2149811 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2149811' 00:05:41.083 killing process with pid 2149811 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2149811 00:05:41.083 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2149811 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 
-- # iptr 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:41.344 12:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.255 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:43.256 00:05:43.256 real 0m47.881s 00:05:43.256 user 3m14.995s 00:05:43.256 sys 0m15.534s 00:05:43.256 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.256 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.256 ************************************ 00:05:43.256 END TEST nvmf_ns_hotplug_stress 00:05:43.256 ************************************ 00:05:43.256 12:48:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:43.256 12:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:43.256 12:48:40 
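The `killprocess 2149811` sequence traced above (autotest_common.sh@952-976) checks the pid with `kill -0`, inspects the process name with `ps --no-headers -o comm=`, and refuses to kill a `sudo` wrapper before sending the signal. A minimal sketch of that helper, under the assumption that the caller owns the pid (so `wait` can reap it):

```shell
# Minimal sketch of the killprocess helper traced above: confirm the
# pid is alive, refuse to kill a sudo wrapper, then kill and reap it.
killprocess_sketch() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1        # nothing to do if dead
  local name
  name=$(ps --no-headers -o comm= "$pid")
  if [ "$name" = sudo ]; then
    return 1                                    # never kill the sudo wrapper
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true               # reap; ignore 143 status
}
```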
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.256 12:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:43.516 ************************************ 00:05:43.516 START TEST nvmf_delete_subsystem 00:05:43.516 ************************************ 00:05:43.516 12:48:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:43.516 * Looking for test storage... 00:05:43.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.516 12:48:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@366 -- # ver2[v]=2 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.516 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:43.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.516 --rc genhtml_branch_coverage=1 00:05:43.516 --rc genhtml_function_coverage=1 00:05:43.516 --rc genhtml_legend=1 00:05:43.516 --rc geninfo_all_blocks=1 00:05:43.516 --rc geninfo_unexecuted_blocks=1 00:05:43.516 00:05:43.516 ' 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:43.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.517 --rc genhtml_branch_coverage=1 00:05:43.517 --rc genhtml_function_coverage=1 00:05:43.517 --rc genhtml_legend=1 00:05:43.517 --rc geninfo_all_blocks=1 00:05:43.517 --rc geninfo_unexecuted_blocks=1 00:05:43.517 00:05:43.517 ' 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:43.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.517 --rc genhtml_branch_coverage=1 00:05:43.517 --rc genhtml_function_coverage=1 00:05:43.517 --rc genhtml_legend=1 00:05:43.517 --rc geninfo_all_blocks=1 00:05:43.517 --rc geninfo_unexecuted_blocks=1 00:05:43.517 00:05:43.517 ' 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:05:43.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.517 --rc genhtml_branch_coverage=1 00:05:43.517 --rc genhtml_function_coverage=1 00:05:43.517 --rc genhtml_legend=1 00:05:43.517 --rc geninfo_all_blocks=1 00:05:43.517 --rc geninfo_unexecuted_blocks=1 00:05:43.517 00:05:43.517 ' 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:43.517 12:48:41 
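The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh@333-368) splits both version strings on `.`, `-`, or `:` and compares the numeric fields one by one. A hedged sketch of that comparison, assuming purely numeric components (the real helper additionally validates each field against `^[0-9]+$`):

```shell
# Sketch of the cmp_versions "less than" logic traced above: split on
# '.', '-' or ':' and compare field by field, padding missing fields
# with 0. Returns 0 (true) when $1 < $2.
version_lt() {
  local IFS='.-:'
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
  for ((i = 0; i < len; i++)); do
    a=${v1[i]:-0}
    b=${v2[i]:-0}
    if (( a < b )); then return 0; fi
    if (( a > b )); then return 1; fi
  done
  return 1    # equal versions are not "less than"
}
```

This is why `lt 1.15 2` succeeds in the trace: the first fields compare 1 < 2, so later fields never matter.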
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.517 12:48:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.517 12:48:41 
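The trace above also logs a genuine shell error: `[: : integer expression expected` from nvmf/common.sh line 33, where `'[' '' -eq 1 ']'` feeds an empty string to a numeric test. A defensive sketch of the usual fix (defaulting the value before `-eq`); `flag_enabled` is an illustrative name, not an SPDK helper:

```shell
# Guarding a numeric test against empty/unset flags, sketching a fix
# for the "[: : integer expression expected" error traced above.
# flag_enabled is a hypothetical name for illustration only.
flag_enabled() {
  # ${1:-0} substitutes 0 when the flag is empty or unset, so '['
  # always sees an integer and never errors out
  [ "${1:-0}" -eq 1 ]
}
```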
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:43.517 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:50.096 12:48:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:50.096 12:48:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:50.096 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:50.096 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:50.096 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:50.097 Found net devices under 0000:86:00.0: cvl_0_0 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:50.097 Found net devices under 0000:86:00.1: cvl_0_1 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:50.097 12:48:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:50.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:50.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:05:50.097 00:05:50.097 --- 10.0.0.2 ping statistics --- 00:05:50.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.097 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:50.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:50.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:05:50.097 00:05:50.097 --- 10.0.0.1 ping statistics --- 00:05:50.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.097 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:50.097 12:48:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2160614 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2160614 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2160614 ']' 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.097 [2024-11-18 12:48:47.289403] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:50.097 [2024-11-18 12:48:47.289454] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:50.097 [2024-11-18 12:48:47.369280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.097 [2024-11-18 12:48:47.411002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:50.097 [2024-11-18 12:48:47.411038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:50.097 [2024-11-18 12:48:47.411045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.097 [2024-11-18 12:48:47.411050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.097 [2024-11-18 12:48:47.411055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:50.097 [2024-11-18 12:48:47.412265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.097 [2024-11-18 12:48:47.412265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.097 [2024-11-18 12:48:47.560595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.097 [2024-11-18 12:48:47.580826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:50.097 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.098 NULL1 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.098 Delay0 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.098 12:48:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2160787 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:50.098 12:48:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:50.098 [2024-11-18 12:48:47.692642] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:52.005 12:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:52.005 12:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.005 12:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with 
error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error 
(sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed 
with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Write completed with error (sct=0, sc=8) 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.265 starting I/O failed: -6 00:05:52.265 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 [2024-11-18 12:48:49.898010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f2c0 is same with the state(6) to be set 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Read 
completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 starting I/O failed: -6 00:05:52.266 
Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 [2024-11-18 12:48:49.901962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f981c000c40 is same with the state(6) to be set 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write 
completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Read completed with error (sct=0, sc=8) 00:05:52.266 Write completed with error (sct=0, sc=8) 00:05:53.205 [2024-11-18 12:48:50.870189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19809a0 is same with the state(6) to be set 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with 
error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 [2024-11-18 12:48:50.901440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f860 is same with the state(6) to be set 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 
00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Write completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.205 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 [2024-11-18 12:48:50.902313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197f4a0 is same with the state(6) to be set 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read 
completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 [2024-11-18 12:48:50.904300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f981c00d020 is same with the state(6) to be set 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Write 
completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Write completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.206 Read completed with error (sct=0, sc=8) 00:05:53.466 [2024-11-18 12:48:50.904917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f981c00d800 is same with the state(6) to be set 00:05:53.466 Initializing NVMe Controllers 00:05:53.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:53.466 Controller IO queue size 128, less than required. 00:05:53.466 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:53.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:53.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:53.466 Initialization complete. Launching workers. 
00:05:53.466 ======================================================== 00:05:53.466 Latency(us) 00:05:53.466 Device Information : IOPS MiB/s Average min max 00:05:53.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.70 0.09 903447.70 344.45 1043526.83 00:05:53.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.29 0.08 907280.16 266.91 1010106.58 00:05:53.466 ======================================================== 00:05:53.466 Total : 348.99 0.17 905251.86 266.91 1043526.83 00:05:53.466 00:05:53.466 [2024-11-18 12:48:50.905488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19809a0 (9): Bad file descriptor 00:05:53.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:53.466 12:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.466 12:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:53.466 12:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2160787 00:05:53.466 12:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:53.725 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:53.725 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2160787 00:05:53.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2160787) - No such process 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2160787 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:05:53.726 12:48:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2160787 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2160787 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.726 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.985 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:53.986 
12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.986 [2024-11-18 12:48:51.434204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2161331 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2161331 00:05:53.986 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:53.986 [2024-11-18 12:48:51.524925] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:54.557 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:54.557 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2161331 00:05:54.557 12:48:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:54.817 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:54.817 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2161331 00:05:54.817 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.386 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:55.386 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2161331 00:05:55.386 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.956 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:55.956 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2161331 00:05:55.956 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.526 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.526 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2161331 00:05:56.526 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.786 12:48:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.786 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2161331 00:05:56.786 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.045 Initializing NVMe Controllers 00:05:57.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:57.045 Controller IO queue size 128, less than required. 00:05:57.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:57.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:57.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:57.045 Initialization complete. Launching workers. 00:05:57.045 ======================================================== 00:05:57.045 Latency(us) 00:05:57.045 Device Information : IOPS MiB/s Average min max 00:05:57.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003387.42 1000133.50 1041586.69 00:05:57.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004939.53 1000178.05 1012617.87 00:05:57.045 ======================================================== 00:05:57.045 Total : 256.00 0.12 1004163.47 1000133.50 1041586.69 00:05:57.045 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2161331 00:05:57.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2161331) - No such process 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 2161331 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:57.306 12:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:57.306 rmmod nvme_tcp 00:05:57.566 rmmod nvme_fabrics 00:05:57.566 rmmod nvme_keyring 00:05:57.566 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:57.566 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:57.566 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:57.566 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2160614 ']' 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2160614 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2160614 ']' 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2160614 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:05:57.567 12:48:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2160614 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2160614' 00:05:57.567 killing process with pid 2160614 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2160614 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2160614 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:57.567 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:05:57.827 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:57.827 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:05:57.827 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.827 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.827 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.736 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:59.736 00:05:59.736 real 0m16.369s 00:05:59.736 user 0m29.448s 00:05:59.736 sys 0m5.543s 00:05:59.736 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.736 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.736 ************************************ 00:05:59.736 END TEST nvmf_delete_subsystem 00:05:59.736 ************************************ 00:05:59.736 12:48:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:59.736 12:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:59.736 12:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.736 12:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:59.736 ************************************ 00:05:59.736 START TEST nvmf_host_management 00:05:59.736 ************************************ 00:05:59.736 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:59.997 * Looking for test storage... 
00:05:59.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:59.997 12:48:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.997 12:48:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:59.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.997 --rc genhtml_branch_coverage=1 00:05:59.997 --rc genhtml_function_coverage=1 00:05:59.997 --rc genhtml_legend=1 00:05:59.997 --rc geninfo_all_blocks=1 00:05:59.997 --rc geninfo_unexecuted_blocks=1 00:05:59.997 00:05:59.997 ' 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:59.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.997 --rc genhtml_branch_coverage=1 00:05:59.997 --rc genhtml_function_coverage=1 00:05:59.997 --rc genhtml_legend=1 00:05:59.997 --rc geninfo_all_blocks=1 00:05:59.997 --rc geninfo_unexecuted_blocks=1 00:05:59.997 00:05:59.997 ' 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:59.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.997 --rc genhtml_branch_coverage=1 00:05:59.997 --rc genhtml_function_coverage=1 00:05:59.997 --rc genhtml_legend=1 00:05:59.997 --rc geninfo_all_blocks=1 00:05:59.997 --rc geninfo_unexecuted_blocks=1 00:05:59.997 00:05:59.997 ' 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:59.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.997 --rc genhtml_branch_coverage=1 00:05:59.997 --rc genhtml_function_coverage=1 00:05:59.997 --rc genhtml_legend=1 00:05:59.997 --rc geninfo_all_blocks=1 00:05:59.997 --rc geninfo_unexecuted_blocks=1 00:05:59.997 00:05:59.997 ' 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.997 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:05:59.998 12:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:06.575 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:06.575 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:06.575 12:49:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:06.575 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:06.576 12:49:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:06.576 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:06.576 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:06.576 12:49:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:06.576 Found net devices under 0000:86:00.0: cvl_0_0 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:06.576 Found net devices under 0000:86:00.1: cvl_0_1 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:06.576 12:49:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:06.576 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:06.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:06.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:06:06.577 00:06:06.577 --- 10.0.0.2 ping statistics --- 00:06:06.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:06.577 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:06.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:06.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:06:06.577 00:06:06.577 --- 10.0.0.1 ping statistics --- 00:06:06.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:06.577 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2165557 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2165557 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2165557 ']' 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:06.577 [2024-11-18 12:49:03.692284] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:06:06.577 [2024-11-18 12:49:03.692334] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:06.577 [2024-11-18 12:49:03.772917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.577 [2024-11-18 12:49:03.816043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:06.577 [2024-11-18 12:49:03.816081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:06.577 [2024-11-18 12:49:03.816088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:06.577 [2024-11-18 12:49:03.816094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:06.577 [2024-11-18 12:49:03.816100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:06.577 [2024-11-18 12:49:03.817705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.577 [2024-11-18 12:49:03.817818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.577 [2024-11-18 12:49:03.817924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.577 [2024-11-18 12:49:03.817925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:06.577 [2024-11-18 12:49:03.953693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:06.577 12:49:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:06.577 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:06.577 Malloc0
00:06:06.577 [2024-11-18 12:49:04.027252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2165603
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2165603 /var/tmp/bdevperf.sock
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2165603 ']'
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:06.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:06.577 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:06.577 {
00:06:06.577 "params": {
00:06:06.577 "name": "Nvme$subsystem",
00:06:06.577 "trtype": "$TEST_TRANSPORT",
00:06:06.577 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:06.577 "adrfam": "ipv4",
00:06:06.577 "trsvcid": "$NVMF_PORT",
00:06:06.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:06.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:06.577 "hdgst": ${hdgst:-false},
00:06:06.577 "ddgst": ${ddgst:-false}
00:06:06.577 },
00:06:06.577 "method": "bdev_nvme_attach_controller"
00:06:06.577 }
00:06:06.578 EOF
00:06:06.578 )")
00:06:06.578 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:06.578 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:06.578 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:06.578 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:06.578 "params": {
00:06:06.578 "name": "Nvme0",
00:06:06.578 "trtype": "tcp",
00:06:06.578 "traddr": "10.0.0.2",
00:06:06.578 "adrfam": "ipv4",
00:06:06.578 "trsvcid": "4420",
00:06:06.578 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:06.578 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:06.578 "hdgst": false,
00:06:06.578 "ddgst": false
00:06:06.578 },
00:06:06.578 "method": "bdev_nvme_attach_controller"
00:06:06.578 }'
00:06:06.578 [2024-11-18 12:49:04.125979] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:06:06.578 [2024-11-18 12:49:04.126022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165603 ]
00:06:06.578 [2024-11-18 12:49:04.200457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.578 [2024-11-18 12:49:04.241643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.837 Running I/O for 10 seconds...
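The `gen_nvmf_target_json` helper traced above builds the `--json` config that bdevperf reads from `/dev/fd/63`: an unquoted heredoc whose variables are expanded by the shell and then pretty-printed through `jq`. A minimal standalone sketch of that expansion step — the variable values are hard-coded here to match the resolved JSON printed in the trace; in the harness they come from the test environment:

```shell
#!/usr/bin/env bash
# Stand-ins for variables the autotest environment exports; values mirror
# the resolved config shown in the log above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=0

# Same heredoc shape as nvmf/common.sh@582: the EOF delimiter is unquoted,
# so $TEST_TRANSPORT, $NVMF_PORT, ${hdgst:-false}, etc. are expanded.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

bdevperf then consumes this via a file-descriptor redirection, as in the traced invocation `bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10`; since `hdgst`/`ddgst` are unset, the `${hdgst:-false}` defaults leave both digests disabled, matching the printed config.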
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:06.837 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:07.097 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67
00:06:07.097 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']'
00:06:07.097 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=678
00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 678 -ge 100 ']' 00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.358 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.358 [2024-11-18 12:49:04.850240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.358 [2024-11-18 12:49:04.850278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.358 [2024-11-18 12:49:04.850294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.358 [2024-11-18 12:49:04.850302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.358 [2024-11-18 12:49:04.850311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.358 [2024-11-18 12:49:04.850319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.358 [2024-11-18 12:49:04.850328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.358 [2024-11-18 12:49:04.850335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.358 [2024-11-18 12:49:04.850343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.358 [2024-11-18 12:49:04.850350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.358 [2024-11-18 12:49:04.850364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 
[2024-11-18 12:49:04.850519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850694] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:07.359 [2024-11-18 12:49:04.850874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.359 [2024-11-18 12:49:04.850974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.359 [2024-11-18 12:49:04.850982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.850989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.850997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:06:07.360 [2024-11-18 12:49:04.851141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 12:49:04.851211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:07.360 [2024-11-18 12:49:04.851220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.360 [2024-11-18 
12:49:04.851226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:07.360 [2024-11-18 12:49:04.851234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.360 [2024-11-18 12:49:04.851241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:07.360 [2024-11-18 12:49:04.851251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.360 [2024-11-18 12:49:04.851258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:07.360 [2024-11-18 12:49:04.851266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.360 [2024-11-18 12:49:04.851273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:07.360 [2024-11-18 12:49:04.851284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:07.360 [2024-11-18 12:49:04.851291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:07.360 [2024-11-18 12:49:04.852270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:07.360 task offset: 101760 on job bdev=Nvme0n1 fails
00:06:07.360 
00:06:07.360 Latency(us)
00:06:07.360 [2024-11-18T11:49:05.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:07.360 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:07.360 Job: Nvme0n1 ended in about 0.41 seconds with error
00:06:07.360 Verification LBA range: start 0x0 length 0x400
00:06:07.360 Nvme0n1 : 0.41 1886.35 117.90 157.20 0.00 30471.52 1567.17 27810.06
00:06:07.360 [2024-11-18T11:49:05.062Z] ===================================================================================================================
00:06:07.360 [2024-11-18T11:49:05.062Z] Total : 1886.35 117.90 157.20 0.00 30471.52 1567.17 27810.06
00:06:07.360 [2024-11-18 12:49:04.854679] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:07.360 [2024-11-18 12:49:04.854703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246f500 (9): Bad file descriptor
00:06:07.360 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:07.360 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:07.360 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:07.360 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:07.360 [2024-11-18 12:49:04.859815] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:06:07.360 [2024-11-18 12:49:04.859911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:06:07.360 [2024-11-18 12:49:04.859935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:07.360 [2024-11-18 12:49:04.859951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode0
00:06:07.360 [2024-11-18 12:49:04.859958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:06:07.360 [2024-11-18 12:49:04.859966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:06:07.360 [2024-11-18 12:49:04.859973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x246f500
00:06:07.360 [2024-11-18 12:49:04.859991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246f500 (9): Bad file descriptor
00:06:07.360 [2024-11-18 12:49:04.860002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:06:07.360 [2024-11-18 12:49:04.860009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:06:07.360 [2024-11-18 12:49:04.860022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:06:07.360 [2024-11-18 12:49:04.860031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:06:07.360 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.360 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2165603 00:06:08.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2165603) - No such process 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:08.300 { 00:06:08.300 "params": { 00:06:08.300 "name": "Nvme$subsystem", 00:06:08.300 "trtype": "$TEST_TRANSPORT", 00:06:08.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:08.300 "adrfam": "ipv4", 00:06:08.300 "trsvcid": "$NVMF_PORT", 00:06:08.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:08.300 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:08.300 "hdgst": ${hdgst:-false}, 00:06:08.300 "ddgst": ${ddgst:-false} 00:06:08.300 }, 00:06:08.300 "method": "bdev_nvme_attach_controller" 00:06:08.300 } 00:06:08.300 EOF 00:06:08.300 )") 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:08.300 12:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:08.300 "params": { 00:06:08.300 "name": "Nvme0", 00:06:08.300 "trtype": "tcp", 00:06:08.300 "traddr": "10.0.0.2", 00:06:08.300 "adrfam": "ipv4", 00:06:08.300 "trsvcid": "4420", 00:06:08.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:08.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:08.300 "hdgst": false, 00:06:08.300 "ddgst": false 00:06:08.300 }, 00:06:08.300 "method": "bdev_nvme_attach_controller" 00:06:08.300 }' 00:06:08.300 [2024-11-18 12:49:05.920100] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:06:08.300 [2024-11-18 12:49:05.920146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166069 ] 00:06:08.300 [2024-11-18 12:49:05.996916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.560 [2024-11-18 12:49:06.035965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.560 Running I/O for 1 seconds... 
00:06:09.940 1984.00 IOPS, 124.00 MiB/s 00:06:09.940 Latency(us) 00:06:09.940 [2024-11-18T11:49:07.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:09.940 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:09.940 Verification LBA range: start 0x0 length 0x400 00:06:09.940 Nvme0n1 : 1.03 1992.86 124.55 0.00 0.00 31611.94 6696.07 27582.11 00:06:09.940 [2024-11-18T11:49:07.642Z] =================================================================================================================== 00:06:09.940 [2024-11-18T11:49:07.642Z] Total : 1992.86 124.55 0.00 0.00 31611.94 6696.07 27582.11 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:09.940 12:49:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:09.940 rmmod nvme_tcp 00:06:09.940 rmmod nvme_fabrics 00:06:09.940 rmmod nvme_keyring 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2165557 ']' 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2165557 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2165557 ']' 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2165557 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2165557 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2165557' 00:06:09.940 killing process with pid 2165557 00:06:09.940 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2165557 00:06:09.940 12:49:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2165557 00:06:10.200 [2024-11-18 12:49:07.666847] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.200 12:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.109 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:12.109 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:12.109 00:06:12.109 real 0m12.354s 00:06:12.109 user 0m19.409s 
00:06:12.109 sys 0m5.537s 00:06:12.109 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:12.109 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.109 ************************************ 00:06:12.109 END TEST nvmf_host_management 00:06:12.109 ************************************ 00:06:12.109 12:49:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:12.109 12:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:12.109 12:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.109 12:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:12.369 ************************************ 00:06:12.369 START TEST nvmf_lvol 00:06:12.369 ************************************ 00:06:12.369 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:12.369 * Looking for test storage... 
00:06:12.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:12.369 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:12.369 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:12.369 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:12.369 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:12.369 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.369 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.369 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.369 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.370 12:49:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:12.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.370 --rc genhtml_branch_coverage=1 00:06:12.370 --rc genhtml_function_coverage=1 00:06:12.370 --rc genhtml_legend=1 00:06:12.370 --rc geninfo_all_blocks=1 00:06:12.370 --rc geninfo_unexecuted_blocks=1 
00:06:12.370 00:06:12.370 ' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:12.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.370 --rc genhtml_branch_coverage=1 00:06:12.370 --rc genhtml_function_coverage=1 00:06:12.370 --rc genhtml_legend=1 00:06:12.370 --rc geninfo_all_blocks=1 00:06:12.370 --rc geninfo_unexecuted_blocks=1 00:06:12.370 00:06:12.370 ' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:12.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.370 --rc genhtml_branch_coverage=1 00:06:12.370 --rc genhtml_function_coverage=1 00:06:12.370 --rc genhtml_legend=1 00:06:12.370 --rc geninfo_all_blocks=1 00:06:12.370 --rc geninfo_unexecuted_blocks=1 00:06:12.370 00:06:12.370 ' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:12.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.370 --rc genhtml_branch_coverage=1 00:06:12.370 --rc genhtml_function_coverage=1 00:06:12.370 --rc genhtml_legend=1 00:06:12.370 --rc geninfo_all_blocks=1 00:06:12.370 --rc geninfo_unexecuted_blocks=1 00:06:12.370 00:06:12.370 ' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.370 12:49:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:12.370 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:12.371 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.371 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.371 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.629 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:12.630 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:12.630 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:12.630 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:19.206 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:19.206 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.206 
12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:19.206 Found net devices under 0000:86:00.0: cvl_0_0 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:19.206 12:49:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:19.206 Found net devices under 0000:86:00.1: cvl_0_1 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:19.206 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:19.207 12:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:19.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:19.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:06:19.207 00:06:19.207 --- 10.0.0.2 ping statistics --- 00:06:19.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.207 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:19.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:19.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:06:19.207 00:06:19.207 --- 10.0.0.1 ping statistics --- 00:06:19.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.207 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2169845 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2169845 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2169845 ']' 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:19.207 [2024-11-18 12:49:16.139538] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:06:19.207 [2024-11-18 12:49:16.139590] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.207 [2024-11-18 12:49:16.217945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.207 [2024-11-18 12:49:16.260958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.207 [2024-11-18 12:49:16.260994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.207 [2024-11-18 12:49:16.261004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.207 [2024-11-18 12:49:16.261011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.207 [2024-11-18 12:49:16.261017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:19.207 [2024-11-18 12:49:16.262366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.207 [2024-11-18 12:49:16.262462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.207 [2024-11-18 12:49:16.262463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:19.207 [2024-11-18 12:49:16.563363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:19.207 12:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:19.467 12:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:19.467 12:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:19.726 12:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:19.986 12:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7f57f698-d944-4006-99b5-fce106ea17b6 00:06:19.986 12:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7f57f698-d944-4006-99b5-fce106ea17b6 lvol 20 00:06:19.986 12:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4bfb1dcd-e006-4a33-87f7-f3a2e6f3300b 00:06:19.986 12:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:20.245 12:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4bfb1dcd-e006-4a33-87f7-f3a2e6f3300b 00:06:20.505 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:20.764 [2024-11-18 12:49:18.251110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.764 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:21.024 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2170301 00:06:21.024 12:49:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:21.024 12:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:21.967 12:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4bfb1dcd-e006-4a33-87f7-f3a2e6f3300b MY_SNAPSHOT 00:06:22.226 12:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a74c0e93-2e21-42c5-9340-292111e6db1e 00:06:22.226 12:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4bfb1dcd-e006-4a33-87f7-f3a2e6f3300b 30 00:06:22.485 12:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a74c0e93-2e21-42c5-9340-292111e6db1e MY_CLONE 00:06:22.744 12:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=414d97f8-a237-44da-8300-127ef116791d 00:06:22.744 12:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 414d97f8-a237-44da-8300-127ef116791d 00:06:23.312 12:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2170301 00:06:31.438 Initializing NVMe Controllers 00:06:31.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:31.438 Controller IO queue size 128, less than required. 00:06:31.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:31.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:31.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:31.438 Initialization complete. Launching workers. 00:06:31.438 ======================================================== 00:06:31.438 Latency(us) 00:06:31.438 Device Information : IOPS MiB/s Average min max 00:06:31.438 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12121.10 47.35 10565.18 1910.59 63237.15 00:06:31.438 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12017.90 46.94 10652.85 3740.86 61418.32 00:06:31.438 ======================================================== 00:06:31.438 Total : 24139.00 94.29 10608.83 1910.59 63237.15 00:06:31.438 00:06:31.438 12:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:31.438 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4bfb1dcd-e006-4a33-87f7-f3a2e6f3300b 00:06:31.704 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7f57f698-d944-4006-99b5-fce106ea17b6 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:31.964 rmmod nvme_tcp 00:06:31.964 rmmod nvme_fabrics 00:06:31.964 rmmod nvme_keyring 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2169845 ']' 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2169845 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2169845 ']' 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2169845 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2169845 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2169845' 00:06:31.964 killing process with pid 2169845 00:06:31.964 12:49:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2169845 00:06:31.964 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2169845 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.224 12:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.767 12:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:34.767 00:06:34.767 real 0m22.077s 00:06:34.767 user 1m3.579s 00:06:34.767 sys 0m7.604s 00:06:34.767 12:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.767 12:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:34.767 ************************************ 00:06:34.767 END TEST 
nvmf_lvol 00:06:34.767 ************************************ 00:06:34.767 12:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:34.767 12:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:34.767 12:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.767 12:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:34.767 ************************************ 00:06:34.767 START TEST nvmf_lvs_grow 00:06:34.767 ************************************ 00:06:34.767 12:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:34.767 * Looking for test storage... 00:06:34.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.767 12:49:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.767 --rc genhtml_branch_coverage=1 00:06:34.767 --rc genhtml_function_coverage=1 00:06:34.767 --rc genhtml_legend=1 00:06:34.767 --rc geninfo_all_blocks=1 00:06:34.767 --rc geninfo_unexecuted_blocks=1 00:06:34.767 00:06:34.767 ' 
00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.767 --rc genhtml_branch_coverage=1 00:06:34.767 --rc genhtml_function_coverage=1 00:06:34.767 --rc genhtml_legend=1 00:06:34.767 --rc geninfo_all_blocks=1 00:06:34.767 --rc geninfo_unexecuted_blocks=1 00:06:34.767 00:06:34.767 ' 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.767 --rc genhtml_branch_coverage=1 00:06:34.767 --rc genhtml_function_coverage=1 00:06:34.767 --rc genhtml_legend=1 00:06:34.767 --rc geninfo_all_blocks=1 00:06:34.767 --rc geninfo_unexecuted_blocks=1 00:06:34.767 00:06:34.767 ' 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.767 --rc genhtml_branch_coverage=1 00:06:34.767 --rc genhtml_function_coverage=1 00:06:34.767 --rc genhtml_legend=1 00:06:34.767 --rc geninfo_all_blocks=1 00:06:34.767 --rc geninfo_unexecuted_blocks=1 00:06:34.767 00:06:34.767 ' 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.767 12:49:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.767 
12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.767 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.768 12:49:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.768 
12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:34.768 12:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.345 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:41.345 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:41.346 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.346 
12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:41.346 Found net devices under 0000:86:00.0: cvl_0_0 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:41.346 Found net devices under 0000:86:00.1: cvl_0_1 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.346 12:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:41.346 12:49:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:41.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:06:41.346 00:06:41.346 --- 10.0.0.2 ping statistics --- 00:06:41.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.346 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:06:41.346 00:06:41.346 --- 10.0.0.1 ping statistics --- 00:06:41.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.346 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2175729 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2175729 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2175729 ']' 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:41.346 [2024-11-18 12:49:38.278104] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:06:41.346 [2024-11-18 12:49:38.278154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.346 [2024-11-18 12:49:38.355802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.346 [2024-11-18 12:49:38.396234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.346 [2024-11-18 12:49:38.396271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.346 [2024-11-18 12:49:38.396279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.346 [2024-11-18 12:49:38.396285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.346 [2024-11-18 12:49:38.396290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:41.346 [2024-11-18 12:49:38.396864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.346 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:41.347 [2024-11-18 12:49:38.704545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:41.347 ************************************ 00:06:41.347 START TEST lvs_grow_clean 00:06:41.347 ************************************ 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:41.347 12:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:41.609 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:41.609 12:49:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:41.609 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:41.877 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:41.877 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:41.877 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 lvol 150 00:06:42.135 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c7c4d0c9-a079-4f48-a448-46e3c7a597cb 00:06:42.135 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:42.135 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:42.135 [2024-11-18 12:49:39.754366] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:42.135 [2024-11-18 12:49:39.754418] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:42.135 true 00:06:42.135 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:42.135 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:42.395 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:42.395 12:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:42.654 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c7c4d0c9-a079-4f48-a448-46e3c7a597cb 00:06:42.654 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:42.914 [2024-11-18 12:49:40.520698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.914 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2176164 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2176164 /var/tmp/bdevperf.sock 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2176164 ']' 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:43.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.174 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:43.174 [2024-11-18 12:49:40.758240] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:06:43.174 [2024-11-18 12:49:40.758290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176164 ] 00:06:43.174 [2024-11-18 12:49:40.832397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.433 [2024-11-18 12:49:40.875328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.433 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:43.433 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:06:43.433 12:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:43.691 Nvme0n1 00:06:43.691 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:43.950 [ 00:06:43.950 { 00:06:43.950 "name": "Nvme0n1", 00:06:43.950 "aliases": [ 00:06:43.950 "c7c4d0c9-a079-4f48-a448-46e3c7a597cb" 00:06:43.950 ], 00:06:43.950 "product_name": "NVMe disk", 00:06:43.950 "block_size": 4096, 00:06:43.950 "num_blocks": 38912, 00:06:43.950 "uuid": "c7c4d0c9-a079-4f48-a448-46e3c7a597cb", 00:06:43.950 "numa_id": 1, 00:06:43.950 "assigned_rate_limits": { 00:06:43.950 "rw_ios_per_sec": 0, 00:06:43.950 "rw_mbytes_per_sec": 0, 00:06:43.950 "r_mbytes_per_sec": 0, 00:06:43.950 "w_mbytes_per_sec": 0 00:06:43.950 }, 00:06:43.950 "claimed": false, 00:06:43.950 "zoned": false, 00:06:43.950 "supported_io_types": { 00:06:43.950 "read": true, 
00:06:43.950 "write": true, 00:06:43.950 "unmap": true, 00:06:43.950 "flush": true, 00:06:43.950 "reset": true, 00:06:43.950 "nvme_admin": true, 00:06:43.950 "nvme_io": true, 00:06:43.950 "nvme_io_md": false, 00:06:43.950 "write_zeroes": true, 00:06:43.950 "zcopy": false, 00:06:43.950 "get_zone_info": false, 00:06:43.950 "zone_management": false, 00:06:43.950 "zone_append": false, 00:06:43.950 "compare": true, 00:06:43.950 "compare_and_write": true, 00:06:43.950 "abort": true, 00:06:43.950 "seek_hole": false, 00:06:43.950 "seek_data": false, 00:06:43.950 "copy": true, 00:06:43.951 "nvme_iov_md": false 00:06:43.951 }, 00:06:43.951 "memory_domains": [ 00:06:43.951 { 00:06:43.951 "dma_device_id": "system", 00:06:43.951 "dma_device_type": 1 00:06:43.951 } 00:06:43.951 ], 00:06:43.951 "driver_specific": { 00:06:43.951 "nvme": [ 00:06:43.951 { 00:06:43.951 "trid": { 00:06:43.951 "trtype": "TCP", 00:06:43.951 "adrfam": "IPv4", 00:06:43.951 "traddr": "10.0.0.2", 00:06:43.951 "trsvcid": "4420", 00:06:43.951 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:43.951 }, 00:06:43.951 "ctrlr_data": { 00:06:43.951 "cntlid": 1, 00:06:43.951 "vendor_id": "0x8086", 00:06:43.951 "model_number": "SPDK bdev Controller", 00:06:43.951 "serial_number": "SPDK0", 00:06:43.951 "firmware_revision": "25.01", 00:06:43.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:43.951 "oacs": { 00:06:43.951 "security": 0, 00:06:43.951 "format": 0, 00:06:43.951 "firmware": 0, 00:06:43.951 "ns_manage": 0 00:06:43.951 }, 00:06:43.951 "multi_ctrlr": true, 00:06:43.951 "ana_reporting": false 00:06:43.951 }, 00:06:43.951 "vs": { 00:06:43.951 "nvme_version": "1.3" 00:06:43.951 }, 00:06:43.951 "ns_data": { 00:06:43.951 "id": 1, 00:06:43.951 "can_share": true 00:06:43.951 } 00:06:43.951 } 00:06:43.951 ], 00:06:43.951 "mp_policy": "active_passive" 00:06:43.951 } 00:06:43.951 } 00:06:43.951 ] 00:06:43.951 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2176239 00:06:43.951 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:43.951 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:43.951 Running I/O for 10 seconds... 00:06:45.331 Latency(us) 00:06:45.331 [2024-11-18T11:49:43.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:45.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.331 Nvme0n1 : 1.00 22799.00 89.06 0.00 0.00 0.00 0.00 0.00 00:06:45.331 [2024-11-18T11:49:43.033Z] =================================================================================================================== 00:06:45.331 [2024-11-18T11:49:43.033Z] Total : 22799.00 89.06 0.00 0.00 0.00 0.00 0.00 00:06:45.331 00:06:45.901 12:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:46.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.159 Nvme0n1 : 2.00 22966.50 89.71 0.00 0.00 0.00 0.00 0.00 00:06:46.159 [2024-11-18T11:49:43.861Z] =================================================================================================================== 00:06:46.159 [2024-11-18T11:49:43.861Z] Total : 22966.50 89.71 0.00 0.00 0.00 0.00 0.00 00:06:46.159 00:06:46.159 true 00:06:46.159 12:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:46.159 12:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:46.419 12:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:46.419 12:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:46.419 12:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2176239 00:06:47.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.011 Nvme0n1 : 3.00 23010.67 89.89 0.00 0.00 0.00 0.00 0.00 00:06:47.011 [2024-11-18T11:49:44.713Z] =================================================================================================================== 00:06:47.011 [2024-11-18T11:49:44.713Z] Total : 23010.67 89.89 0.00 0.00 0.00 0.00 0.00 00:06:47.011 00:06:48.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.054 Nvme0n1 : 4.00 23050.25 90.04 0.00 0.00 0.00 0.00 0.00 00:06:48.054 [2024-11-18T11:49:45.756Z] =================================================================================================================== 00:06:48.054 [2024-11-18T11:49:45.756Z] Total : 23050.25 90.04 0.00 0.00 0.00 0.00 0.00 00:06:48.054 00:06:49.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.007 Nvme0n1 : 5.00 23094.40 90.21 0.00 0.00 0.00 0.00 0.00 00:06:49.007 [2024-11-18T11:49:46.709Z] =================================================================================================================== 00:06:49.007 [2024-11-18T11:49:46.709Z] Total : 23094.40 90.21 0.00 0.00 0.00 0.00 0.00 00:06:49.007 00:06:49.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.984 Nvme0n1 : 6.00 23141.00 90.39 0.00 0.00 0.00 0.00 0.00 00:06:49.984 [2024-11-18T11:49:47.686Z] =================================================================================================================== 00:06:49.984 
[2024-11-18T11:49:47.686Z] Total : 23141.00 90.39 0.00 0.00 0.00 0.00 0.00 00:06:49.984 00:06:50.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.979 Nvme0n1 : 7.00 23165.57 90.49 0.00 0.00 0.00 0.00 0.00 00:06:50.979 [2024-11-18T11:49:48.681Z] =================================================================================================================== 00:06:50.979 [2024-11-18T11:49:48.681Z] Total : 23165.57 90.49 0.00 0.00 0.00 0.00 0.00 00:06:50.979 00:06:51.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.987 Nvme0n1 : 8.00 23182.25 90.56 0.00 0.00 0.00 0.00 0.00 00:06:51.987 [2024-11-18T11:49:49.689Z] =================================================================================================================== 00:06:51.987 [2024-11-18T11:49:49.689Z] Total : 23182.25 90.56 0.00 0.00 0.00 0.00 0.00 00:06:51.987 00:06:52.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.966 Nvme0n1 : 9.00 23153.33 90.44 0.00 0.00 0.00 0.00 0.00 00:06:52.966 [2024-11-18T11:49:50.668Z] =================================================================================================================== 00:06:52.966 [2024-11-18T11:49:50.668Z] Total : 23153.33 90.44 0.00 0.00 0.00 0.00 0.00 00:06:52.966 00:06:53.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.955 Nvme0n1 : 10.00 23178.90 90.54 0.00 0.00 0.00 0.00 0.00 00:06:53.955 [2024-11-18T11:49:51.657Z] =================================================================================================================== 00:06:53.955 [2024-11-18T11:49:51.657Z] Total : 23178.90 90.54 0.00 0.00 0.00 0.00 0.00 00:06:53.955 00:06:53.955 00:06:53.955 Latency(us) 00:06:53.955 [2024-11-18T11:49:51.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:53.955 Nvme0n1 : 10.00 23180.56 90.55 0.00 0.00 5519.01 3262.55 12480.33 00:06:53.955 [2024-11-18T11:49:51.657Z] =================================================================================================================== 00:06:53.955 [2024-11-18T11:49:51.657Z] Total : 23180.56 90.55 0.00 0.00 5519.01 3262.55 12480.33 00:06:53.955 { 00:06:53.955 "results": [ 00:06:53.955 { 00:06:53.955 "job": "Nvme0n1", 00:06:53.955 "core_mask": "0x2", 00:06:53.955 "workload": "randwrite", 00:06:53.955 "status": "finished", 00:06:53.955 "queue_depth": 128, 00:06:53.955 "io_size": 4096, 00:06:53.955 "runtime": 10.004806, 00:06:53.955 "iops": 23180.55942314124, 00:06:53.955 "mibps": 90.54906024664547, 00:06:53.955 "io_failed": 0, 00:06:53.955 "io_timeout": 0, 00:06:53.955 "avg_latency_us": 5519.010469495178, 00:06:53.955 "min_latency_us": 3262.553043478261, 00:06:53.955 "max_latency_us": 12480.333913043478 00:06:53.955 } 00:06:53.955 ], 00:06:53.955 "core_count": 1 00:06:53.955 } 00:06:53.955 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2176164 00:06:53.955 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2176164 ']' 00:06:54.275 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2176164 00:06:54.275 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:06:54.275 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.275 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2176164 00:06:54.275 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:54.275 12:49:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:54.275 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2176164' 00:06:54.275 killing process with pid 2176164 00:06:54.275 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2176164 00:06:54.275 Received shutdown signal, test time was about 10.000000 seconds 00:06:54.275 00:06:54.275 Latency(us) 00:06:54.275 [2024-11-18T11:49:51.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.275 [2024-11-18T11:49:51.977Z] =================================================================================================================== 00:06:54.275 [2024-11-18T11:49:51.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:54.275 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2176164 00:06:54.275 12:49:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:54.547 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:54.819 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:54.819 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:54.819 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:54.819 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:54.819 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:55.095 [2024-11-18 12:49:52.632046] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:55.095 
12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:55.095 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:55.384 request: 00:06:55.384 { 00:06:55.384 "uuid": "fff4bc99-01fc-4d3f-9c60-00fa3bf1df52", 00:06:55.384 "method": "bdev_lvol_get_lvstores", 00:06:55.384 "req_id": 1 00:06:55.384 } 00:06:55.384 Got JSON-RPC error response 00:06:55.384 response: 00:06:55.384 { 00:06:55.384 "code": -19, 00:06:55.384 "message": "No such device" 00:06:55.384 } 00:06:55.384 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:06:55.384 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.384 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.384 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.384 12:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:55.384 aio_bdev 00:06:55.384 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev c7c4d0c9-a079-4f48-a448-46e3c7a597cb 00:06:55.384 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=c7c4d0c9-a079-4f48-a448-46e3c7a597cb 00:06:55.384 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:55.384 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:06:55.384 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:55.384 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:55.384 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:55.674 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c7c4d0c9-a079-4f48-a448-46e3c7a597cb -t 2000 00:06:55.977 [ 00:06:55.977 { 00:06:55.977 "name": "c7c4d0c9-a079-4f48-a448-46e3c7a597cb", 00:06:55.977 "aliases": [ 00:06:55.977 "lvs/lvol" 00:06:55.977 ], 00:06:55.977 "product_name": "Logical Volume", 00:06:55.977 "block_size": 4096, 00:06:55.977 "num_blocks": 38912, 00:06:55.977 "uuid": "c7c4d0c9-a079-4f48-a448-46e3c7a597cb", 00:06:55.977 "assigned_rate_limits": { 00:06:55.977 "rw_ios_per_sec": 0, 00:06:55.977 "rw_mbytes_per_sec": 0, 00:06:55.977 "r_mbytes_per_sec": 0, 00:06:55.977 "w_mbytes_per_sec": 0 00:06:55.977 }, 00:06:55.977 "claimed": false, 00:06:55.977 "zoned": false, 00:06:55.977 "supported_io_types": { 00:06:55.977 "read": true, 00:06:55.977 "write": true, 00:06:55.977 "unmap": true, 00:06:55.977 "flush": false, 00:06:55.977 "reset": true, 00:06:55.977 
"nvme_admin": false, 00:06:55.977 "nvme_io": false, 00:06:55.977 "nvme_io_md": false, 00:06:55.977 "write_zeroes": true, 00:06:55.977 "zcopy": false, 00:06:55.977 "get_zone_info": false, 00:06:55.977 "zone_management": false, 00:06:55.977 "zone_append": false, 00:06:55.978 "compare": false, 00:06:55.978 "compare_and_write": false, 00:06:55.978 "abort": false, 00:06:55.978 "seek_hole": true, 00:06:55.978 "seek_data": true, 00:06:55.978 "copy": false, 00:06:55.978 "nvme_iov_md": false 00:06:55.978 }, 00:06:55.978 "driver_specific": { 00:06:55.978 "lvol": { 00:06:55.978 "lvol_store_uuid": "fff4bc99-01fc-4d3f-9c60-00fa3bf1df52", 00:06:55.978 "base_bdev": "aio_bdev", 00:06:55.978 "thin_provision": false, 00:06:55.978 "num_allocated_clusters": 38, 00:06:55.978 "snapshot": false, 00:06:55.978 "clone": false, 00:06:55.978 "esnap_clone": false 00:06:55.978 } 00:06:55.978 } 00:06:55.978 } 00:06:55.978 ] 00:06:55.978 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:06:55.978 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:55.978 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:55.978 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:55.978 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:55.978 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:56.267 12:49:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:56.267 12:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c7c4d0c9-a079-4f48-a448-46e3c7a597cb 00:06:56.535 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fff4bc99-01fc-4d3f-9c60-00fa3bf1df52 00:06:56.535 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.813 00:06:56.813 real 0m15.621s 00:06:56.813 user 0m15.176s 00:06:56.813 sys 0m1.535s 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:56.813 ************************************ 00:06:56.813 END TEST lvs_grow_clean 00:06:56.813 ************************************ 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:56.813 ************************************ 
00:06:56.813 START TEST lvs_grow_dirty 00:06:56.813 ************************************ 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.813 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:57.091 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:57.091 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:57.360 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:06:57.360 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:06:57.360 12:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:57.620 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:57.620 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:57.620 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f lvol 150 00:06:57.620 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f99aee01-c679-43a0-881e-ce04d4347eea 00:06:57.620 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:57.620 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:57.879 [2024-11-18 12:49:55.444007] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:06:57.879 [2024-11-18 12:49:55.444060] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:57.879 true 00:06:57.879 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:06:57.879 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:58.139 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:58.139 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:58.398 12:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f99aee01-c679-43a0-881e-ce04d4347eea 00:06:58.398 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:58.658 [2024-11-18 12:49:56.182200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:58.658 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2178872 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2178872 /var/tmp/bdevperf.sock 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2178872 ']' 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:58.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.918 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:58.918 [2024-11-18 12:49:56.403616] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:06:58.918 [2024-11-18 12:49:56.403662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178872 ] 00:06:58.918 [2024-11-18 12:49:56.479221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.918 [2024-11-18 12:49:56.519643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.178 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.178 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:06:59.178 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:59.437 Nvme0n1 00:06:59.437 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:59.696 [ 00:06:59.696 { 00:06:59.696 "name": "Nvme0n1", 00:06:59.696 "aliases": [ 00:06:59.696 "f99aee01-c679-43a0-881e-ce04d4347eea" 00:06:59.696 ], 00:06:59.696 "product_name": "NVMe disk", 00:06:59.696 "block_size": 4096, 00:06:59.696 "num_blocks": 38912, 00:06:59.696 "uuid": "f99aee01-c679-43a0-881e-ce04d4347eea", 00:06:59.696 "numa_id": 1, 00:06:59.696 "assigned_rate_limits": { 00:06:59.696 "rw_ios_per_sec": 0, 00:06:59.696 "rw_mbytes_per_sec": 0, 00:06:59.696 "r_mbytes_per_sec": 0, 00:06:59.696 "w_mbytes_per_sec": 0 00:06:59.696 }, 00:06:59.696 "claimed": false, 00:06:59.696 "zoned": false, 00:06:59.696 "supported_io_types": { 00:06:59.696 "read": true, 
00:06:59.696 "write": true, 00:06:59.696 "unmap": true, 00:06:59.696 "flush": true, 00:06:59.696 "reset": true, 00:06:59.696 "nvme_admin": true, 00:06:59.696 "nvme_io": true, 00:06:59.696 "nvme_io_md": false, 00:06:59.696 "write_zeroes": true, 00:06:59.696 "zcopy": false, 00:06:59.696 "get_zone_info": false, 00:06:59.696 "zone_management": false, 00:06:59.696 "zone_append": false, 00:06:59.696 "compare": true, 00:06:59.696 "compare_and_write": true, 00:06:59.696 "abort": true, 00:06:59.696 "seek_hole": false, 00:06:59.696 "seek_data": false, 00:06:59.696 "copy": true, 00:06:59.697 "nvme_iov_md": false 00:06:59.697 }, 00:06:59.697 "memory_domains": [ 00:06:59.697 { 00:06:59.697 "dma_device_id": "system", 00:06:59.697 "dma_device_type": 1 00:06:59.697 } 00:06:59.697 ], 00:06:59.697 "driver_specific": { 00:06:59.697 "nvme": [ 00:06:59.697 { 00:06:59.697 "trid": { 00:06:59.697 "trtype": "TCP", 00:06:59.697 "adrfam": "IPv4", 00:06:59.697 "traddr": "10.0.0.2", 00:06:59.697 "trsvcid": "4420", 00:06:59.697 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:59.697 }, 00:06:59.697 "ctrlr_data": { 00:06:59.697 "cntlid": 1, 00:06:59.697 "vendor_id": "0x8086", 00:06:59.697 "model_number": "SPDK bdev Controller", 00:06:59.697 "serial_number": "SPDK0", 00:06:59.697 "firmware_revision": "25.01", 00:06:59.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:59.697 "oacs": { 00:06:59.697 "security": 0, 00:06:59.697 "format": 0, 00:06:59.697 "firmware": 0, 00:06:59.697 "ns_manage": 0 00:06:59.697 }, 00:06:59.697 "multi_ctrlr": true, 00:06:59.697 "ana_reporting": false 00:06:59.697 }, 00:06:59.697 "vs": { 00:06:59.697 "nvme_version": "1.3" 00:06:59.697 }, 00:06:59.697 "ns_data": { 00:06:59.697 "id": 1, 00:06:59.697 "can_share": true 00:06:59.697 } 00:06:59.697 } 00:06:59.697 ], 00:06:59.697 "mp_policy": "active_passive" 00:06:59.697 } 00:06:59.697 } 00:06:59.697 ] 00:06:59.697 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:59.697 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2178954 00:06:59.697 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:59.697 Running I/O for 10 seconds... 00:07:00.633 Latency(us) 00:07:00.633 [2024-11-18T11:49:58.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.633 Nvme0n1 : 1.00 22811.00 89.11 0.00 0.00 0.00 0.00 0.00 00:07:00.633 [2024-11-18T11:49:58.335Z] =================================================================================================================== 00:07:00.633 [2024-11-18T11:49:58.335Z] Total : 22811.00 89.11 0.00 0.00 0.00 0.00 0.00 00:07:00.633 00:07:01.641 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:01.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.641 Nvme0n1 : 2.00 22948.00 89.64 0.00 0.00 0.00 0.00 0.00 00:07:01.641 [2024-11-18T11:49:59.343Z] =================================================================================================================== 00:07:01.641 [2024-11-18T11:49:59.343Z] Total : 22948.00 89.64 0.00 0.00 0.00 0.00 0.00 00:07:01.641 00:07:01.918 true 00:07:01.918 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:01.918 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:02.178 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:02.178 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:02.178 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2178954 00:07:02.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.746 Nvme0n1 : 3.00 22912.00 89.50 0.00 0.00 0.00 0.00 0.00 00:07:02.746 [2024-11-18T11:50:00.448Z] =================================================================================================================== 00:07:02.746 [2024-11-18T11:50:00.448Z] Total : 22912.00 89.50 0.00 0.00 0.00 0.00 0.00 00:07:02.746 00:07:03.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.684 Nvme0n1 : 4.00 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:07:03.684 [2024-11-18T11:50:01.386Z] =================================================================================================================== 00:07:03.684 [2024-11-18T11:50:01.386Z] Total : 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:07:03.684 00:07:05.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.062 Nvme0n1 : 5.00 23012.40 89.89 0.00 0.00 0.00 0.00 0.00 00:07:05.062 [2024-11-18T11:50:02.764Z] =================================================================================================================== 00:07:05.062 [2024-11-18T11:50:02.764Z] Total : 23012.40 89.89 0.00 0.00 0.00 0.00 0.00 00:07:05.062 00:07:06.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.001 Nvme0n1 : 6.00 23061.83 90.09 0.00 0.00 0.00 0.00 0.00 00:07:06.001 [2024-11-18T11:50:03.703Z] =================================================================================================================== 00:07:06.001 
[2024-11-18T11:50:03.703Z] Total : 23061.83 90.09 0.00 0.00 0.00 0.00 0.00 00:07:06.001 00:07:06.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.941 Nvme0n1 : 7.00 23102.14 90.24 0.00 0.00 0.00 0.00 0.00 00:07:06.941 [2024-11-18T11:50:04.643Z] =================================================================================================================== 00:07:06.941 [2024-11-18T11:50:04.643Z] Total : 23102.14 90.24 0.00 0.00 0.00 0.00 0.00 00:07:06.941 00:07:07.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.879 Nvme0n1 : 8.00 23140.25 90.39 0.00 0.00 0.00 0.00 0.00 00:07:07.879 [2024-11-18T11:50:05.581Z] =================================================================================================================== 00:07:07.879 [2024-11-18T11:50:05.581Z] Total : 23140.25 90.39 0.00 0.00 0.00 0.00 0.00 00:07:07.879 00:07:08.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.818 Nvme0n1 : 9.00 23166.78 90.50 0.00 0.00 0.00 0.00 0.00 00:07:08.818 [2024-11-18T11:50:06.520Z] =================================================================================================================== 00:07:08.818 [2024-11-18T11:50:06.520Z] Total : 23166.78 90.50 0.00 0.00 0.00 0.00 0.00 00:07:08.818 00:07:09.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.758 Nvme0n1 : 10.00 23189.60 90.58 0.00 0.00 0.00 0.00 0.00 00:07:09.758 [2024-11-18T11:50:07.460Z] =================================================================================================================== 00:07:09.758 [2024-11-18T11:50:07.460Z] Total : 23189.60 90.58 0.00 0.00 0.00 0.00 0.00 00:07:09.758 00:07:09.758 00:07:09.758 Latency(us) 00:07:09.758 [2024-11-18T11:50:07.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:09.759 Nvme0n1 : 10.00 23193.80 90.60 0.00 0.00 5515.86 2692.67 13107.20 00:07:09.759 [2024-11-18T11:50:07.461Z] =================================================================================================================== 00:07:09.759 [2024-11-18T11:50:07.461Z] Total : 23193.80 90.60 0.00 0.00 5515.86 2692.67 13107.20 00:07:09.759 { 00:07:09.759 "results": [ 00:07:09.759 { 00:07:09.759 "job": "Nvme0n1", 00:07:09.759 "core_mask": "0x2", 00:07:09.759 "workload": "randwrite", 00:07:09.759 "status": "finished", 00:07:09.759 "queue_depth": 128, 00:07:09.759 "io_size": 4096, 00:07:09.759 "runtime": 10.003707, 00:07:09.759 "iops": 23193.802057577257, 00:07:09.759 "mibps": 90.60078928741116, 00:07:09.759 "io_failed": 0, 00:07:09.759 "io_timeout": 0, 00:07:09.759 "avg_latency_us": 5515.85535025612, 00:07:09.759 "min_latency_us": 2692.6747826086958, 00:07:09.759 "max_latency_us": 13107.2 00:07:09.759 } 00:07:09.759 ], 00:07:09.759 "core_count": 1 00:07:09.759 } 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2178872 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2178872 ']' 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2178872 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2178872 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2178872' 00:07:09.759 killing process with pid 2178872 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2178872 00:07:09.759 Received shutdown signal, test time was about 10.000000 seconds 00:07:09.759 00:07:09.759 Latency(us) 00:07:09.759 [2024-11-18T11:50:07.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.759 [2024-11-18T11:50:07.461Z] =================================================================================================================== 00:07:09.759 [2024-11-18T11:50:07.461Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:09.759 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2178872 00:07:10.018 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:10.278 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:10.538 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:10.538 12:50:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:10.538 12:50:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2175729 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2175729 00:07:10.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2175729 Killed "${NVMF_APP[@]}" "$@" 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2180744 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2180744 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2180744 ']' 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.538 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:10.798 [2024-11-18 12:50:08.273740] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:10.798 [2024-11-18 12:50:08.273788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.798 [2024-11-18 12:50:08.352900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.798 [2024-11-18 12:50:08.394021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.798 [2024-11-18 12:50:08.394059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.798 [2024-11-18 12:50:08.394067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.798 [2024-11-18 12:50:08.394073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.798 [2024-11-18 12:50:08.394078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:10.798 [2024-11-18 12:50:08.394650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.798 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.798 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:10.798 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.798 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.798 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:11.058 [2024-11-18 12:50:08.697076] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:11.058 [2024-11-18 12:50:08.697176] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:11.058 [2024-11-18 12:50:08.697202] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f99aee01-c679-43a0-881e-ce04d4347eea 00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=f99aee01-c679-43a0-881e-ce04d4347eea 
00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:11.058 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:11.318 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f99aee01-c679-43a0-881e-ce04d4347eea -t 2000 00:07:11.578 [ 00:07:11.578 { 00:07:11.578 "name": "f99aee01-c679-43a0-881e-ce04d4347eea", 00:07:11.578 "aliases": [ 00:07:11.578 "lvs/lvol" 00:07:11.578 ], 00:07:11.578 "product_name": "Logical Volume", 00:07:11.578 "block_size": 4096, 00:07:11.578 "num_blocks": 38912, 00:07:11.578 "uuid": "f99aee01-c679-43a0-881e-ce04d4347eea", 00:07:11.578 "assigned_rate_limits": { 00:07:11.578 "rw_ios_per_sec": 0, 00:07:11.578 "rw_mbytes_per_sec": 0, 00:07:11.578 "r_mbytes_per_sec": 0, 00:07:11.578 "w_mbytes_per_sec": 0 00:07:11.578 }, 00:07:11.578 "claimed": false, 00:07:11.578 "zoned": false, 00:07:11.578 "supported_io_types": { 00:07:11.578 "read": true, 00:07:11.578 "write": true, 00:07:11.578 "unmap": true, 00:07:11.578 "flush": false, 00:07:11.578 "reset": true, 00:07:11.578 "nvme_admin": false, 00:07:11.578 "nvme_io": false, 00:07:11.578 "nvme_io_md": false, 00:07:11.578 "write_zeroes": true, 00:07:11.578 "zcopy": false, 00:07:11.578 "get_zone_info": false, 00:07:11.578 "zone_management": false, 00:07:11.578 "zone_append": 
false, 00:07:11.578 "compare": false, 00:07:11.578 "compare_and_write": false, 00:07:11.578 "abort": false, 00:07:11.578 "seek_hole": true, 00:07:11.578 "seek_data": true, 00:07:11.578 "copy": false, 00:07:11.578 "nvme_iov_md": false 00:07:11.578 }, 00:07:11.578 "driver_specific": { 00:07:11.578 "lvol": { 00:07:11.578 "lvol_store_uuid": "86c5485e-86b2-4e70-90c6-0e0acab44c2f", 00:07:11.578 "base_bdev": "aio_bdev", 00:07:11.578 "thin_provision": false, 00:07:11.578 "num_allocated_clusters": 38, 00:07:11.578 "snapshot": false, 00:07:11.578 "clone": false, 00:07:11.578 "esnap_clone": false 00:07:11.578 } 00:07:11.578 } 00:07:11.578 } 00:07:11.578 ] 00:07:11.578 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:11.578 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:11.579 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:11.838 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:11.838 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:11.838 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:11.838 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:11.838 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:12.098 [2024-11-18 12:50:09.649876] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.098 12:50:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:12.098 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:12.357 request: 00:07:12.357 { 00:07:12.357 "uuid": "86c5485e-86b2-4e70-90c6-0e0acab44c2f", 00:07:12.357 "method": "bdev_lvol_get_lvstores", 00:07:12.357 "req_id": 1 00:07:12.358 } 00:07:12.358 Got JSON-RPC error response 00:07:12.358 response: 00:07:12.358 { 00:07:12.358 "code": -19, 00:07:12.358 "message": "No such device" 00:07:12.358 } 00:07:12.358 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:12.358 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.358 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.358 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.358 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.617 aio_bdev 00:07:12.617 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f99aee01-c679-43a0-881e-ce04d4347eea 00:07:12.617 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=f99aee01-c679-43a0-881e-ce04d4347eea 00:07:12.617 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:12.617 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:12.617 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:12.618 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:12.618 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:12.618 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f99aee01-c679-43a0-881e-ce04d4347eea -t 2000 00:07:12.877 [ 00:07:12.877 { 00:07:12.877 "name": "f99aee01-c679-43a0-881e-ce04d4347eea", 00:07:12.877 "aliases": [ 00:07:12.877 "lvs/lvol" 00:07:12.877 ], 00:07:12.877 "product_name": "Logical Volume", 00:07:12.877 "block_size": 4096, 00:07:12.877 "num_blocks": 38912, 00:07:12.877 "uuid": "f99aee01-c679-43a0-881e-ce04d4347eea", 00:07:12.877 "assigned_rate_limits": { 00:07:12.877 "rw_ios_per_sec": 0, 00:07:12.877 "rw_mbytes_per_sec": 0, 00:07:12.877 "r_mbytes_per_sec": 0, 00:07:12.877 "w_mbytes_per_sec": 0 00:07:12.877 }, 00:07:12.877 "claimed": false, 00:07:12.877 "zoned": false, 00:07:12.877 "supported_io_types": { 00:07:12.877 "read": true, 00:07:12.877 "write": true, 00:07:12.877 "unmap": true, 00:07:12.877 "flush": false, 00:07:12.877 "reset": true, 00:07:12.877 "nvme_admin": false, 00:07:12.877 "nvme_io": false, 00:07:12.877 "nvme_io_md": false, 00:07:12.877 "write_zeroes": true, 00:07:12.877 "zcopy": false, 00:07:12.877 "get_zone_info": false, 00:07:12.877 "zone_management": false, 00:07:12.877 "zone_append": false, 00:07:12.877 "compare": false, 00:07:12.877 "compare_and_write": false, 
00:07:12.877 "abort": false, 00:07:12.877 "seek_hole": true, 00:07:12.877 "seek_data": true, 00:07:12.877 "copy": false, 00:07:12.877 "nvme_iov_md": false 00:07:12.878 }, 00:07:12.878 "driver_specific": { 00:07:12.878 "lvol": { 00:07:12.878 "lvol_store_uuid": "86c5485e-86b2-4e70-90c6-0e0acab44c2f", 00:07:12.878 "base_bdev": "aio_bdev", 00:07:12.878 "thin_provision": false, 00:07:12.878 "num_allocated_clusters": 38, 00:07:12.878 "snapshot": false, 00:07:12.878 "clone": false, 00:07:12.878 "esnap_clone": false 00:07:12.878 } 00:07:12.878 } 00:07:12.878 } 00:07:12.878 ] 00:07:12.878 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:12.878 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:12.878 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:13.137 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:13.137 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:13.137 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:13.397 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:13.397 12:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f99aee01-c679-43a0-881e-ce04d4347eea 00:07:13.397 12:50:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86c5485e-86b2-4e70-90c6-0e0acab44c2f 00:07:13.656 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:13.916 00:07:13.916 real 0m17.056s 00:07:13.916 user 0m43.959s 00:07:13.916 sys 0m3.770s 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:13.916 ************************************ 00:07:13.916 END TEST lvs_grow_dirty 00:07:13.916 ************************************ 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:13.916 nvmf_trace.0 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:13.916 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:13.916 rmmod nvme_tcp 00:07:14.175 rmmod nvme_fabrics 00:07:14.175 rmmod nvme_keyring 00:07:14.175 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:14.175 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:14.175 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:14.175 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2180744 ']' 00:07:14.175 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2180744 00:07:14.175 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2180744 ']' 00:07:14.175 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2180744 
00:07:14.175 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:14.176 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:14.176 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2180744 00:07:14.176 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:14.176 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:14.176 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2180744' 00:07:14.176 killing process with pid 2180744 00:07:14.176 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2180744 00:07:14.176 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2180744 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.436 12:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.344 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:16.344 00:07:16.344 real 0m41.973s 00:07:16.344 user 1m4.812s 00:07:16.344 sys 0m10.265s 00:07:16.344 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.344 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.344 ************************************ 00:07:16.344 END TEST nvmf_lvs_grow 00:07:16.344 ************************************ 00:07:16.344 12:50:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:16.344 12:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:16.344 12:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:16.344 12:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.344 ************************************ 00:07:16.344 START TEST nvmf_bdev_io_wait 00:07:16.344 ************************************ 00:07:16.345 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:16.605 * Looking for test storage... 
00:07:16.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:16.605 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.605 --rc genhtml_branch_coverage=1 00:07:16.605 --rc genhtml_function_coverage=1 00:07:16.605 --rc genhtml_legend=1 00:07:16.605 --rc geninfo_all_blocks=1 00:07:16.605 --rc geninfo_unexecuted_blocks=1 00:07:16.605 00:07:16.605 ' 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:16.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.605 --rc genhtml_branch_coverage=1 00:07:16.605 --rc genhtml_function_coverage=1 00:07:16.605 --rc genhtml_legend=1 00:07:16.605 --rc geninfo_all_blocks=1 00:07:16.605 --rc geninfo_unexecuted_blocks=1 00:07:16.605 00:07:16.605 ' 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:16.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.605 --rc genhtml_branch_coverage=1 00:07:16.605 --rc genhtml_function_coverage=1 00:07:16.605 --rc genhtml_legend=1 00:07:16.605 --rc geninfo_all_blocks=1 00:07:16.605 --rc geninfo_unexecuted_blocks=1 00:07:16.605 00:07:16.605 ' 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:16.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.605 --rc genhtml_branch_coverage=1 00:07:16.605 --rc genhtml_function_coverage=1 00:07:16.605 --rc genhtml_legend=1 00:07:16.605 --rc geninfo_all_blocks=1 00:07:16.605 --rc geninfo_unexecuted_blocks=1 00:07:16.605 00:07:16.605 ' 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:16.605 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.605 12:50:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:16.606 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:23.185 12:50:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:23.185 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:23.185 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.185 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.186 12:50:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:23.186 Found net devices under 0000:86:00.0: cvl_0_0 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.186 
12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:23.186 Found net devices under 0000:86:00.1: cvl_0_1 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.186 12:50:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.186 12:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:23.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:07:23.186 00:07:23.186 --- 10.0.0.2 ping statistics --- 00:07:23.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.186 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:23.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:07:23.186 00:07:23.186 --- 10.0.0.1 ping statistics --- 00:07:23.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.186 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2185012 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2185012 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2185012 ']' 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.186 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.186 [2024-11-18 12:50:20.305480] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:23.186 [2024-11-18 12:50:20.305535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.186 [2024-11-18 12:50:20.385224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.186 [2024-11-18 12:50:20.430332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.186 [2024-11-18 12:50:20.430376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:23.186 [2024-11-18 12:50:20.430383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.186 [2024-11-18 12:50:20.430389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.187 [2024-11-18 12:50:20.430394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.187 [2024-11-18 12:50:20.431958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.187 [2024-11-18 12:50:20.432066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.187 [2024-11-18 12:50:20.432209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.187 [2024-11-18 12:50:20.432211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.187 12:50:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.187 [2024-11-18 12:50:20.563928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.187 Malloc0 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.187 
12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.187 [2024-11-18 12:50:20.611226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2185047 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2185049 
00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:23.187 { 00:07:23.187 "params": { 00:07:23.187 "name": "Nvme$subsystem", 00:07:23.187 "trtype": "$TEST_TRANSPORT", 00:07:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:23.187 "adrfam": "ipv4", 00:07:23.187 "trsvcid": "$NVMF_PORT", 00:07:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:23.187 "hdgst": ${hdgst:-false}, 00:07:23.187 "ddgst": ${ddgst:-false} 00:07:23.187 }, 00:07:23.187 "method": "bdev_nvme_attach_controller" 00:07:23.187 } 00:07:23.187 EOF 00:07:23.187 )") 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2185051 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:23.187 12:50:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2185054 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:23.187 { 00:07:23.187 "params": { 00:07:23.187 "name": "Nvme$subsystem", 00:07:23.187 "trtype": "$TEST_TRANSPORT", 00:07:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:23.187 "adrfam": "ipv4", 00:07:23.187 "trsvcid": "$NVMF_PORT", 00:07:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:23.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:23.187 "hdgst": ${hdgst:-false}, 00:07:23.187 "ddgst": ${ddgst:-false} 00:07:23.187 }, 00:07:23.187 "method": "bdev_nvme_attach_controller" 00:07:23.187 } 00:07:23.187 EOF 00:07:23.187 )") 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:23.187 { 00:07:23.187 "params": { 00:07:23.187 "name": "Nvme$subsystem", 00:07:23.187 "trtype": "$TEST_TRANSPORT", 00:07:23.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:23.187 "adrfam": "ipv4", 00:07:23.187 "trsvcid": "$NVMF_PORT", 00:07:23.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:23.187 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:23.187 "hdgst": ${hdgst:-false}, 00:07:23.187 "ddgst": ${ddgst:-false} 00:07:23.187 }, 00:07:23.187 "method": "bdev_nvme_attach_controller" 00:07:23.187 } 00:07:23.187 EOF 00:07:23.187 )") 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:23.187 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:23.188 { 00:07:23.188 "params": { 00:07:23.188 "name": "Nvme$subsystem", 00:07:23.188 "trtype": "$TEST_TRANSPORT", 00:07:23.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:23.188 "adrfam": "ipv4", 00:07:23.188 "trsvcid": "$NVMF_PORT", 00:07:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:23.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:23.188 "hdgst": ${hdgst:-false}, 00:07:23.188 "ddgst": ${ddgst:-false} 00:07:23.188 }, 00:07:23.188 "method": "bdev_nvme_attach_controller" 00:07:23.188 } 00:07:23.188 EOF 00:07:23.188 )") 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2185047 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:23.188 "params": { 00:07:23.188 "name": "Nvme1", 00:07:23.188 "trtype": "tcp", 00:07:23.188 "traddr": "10.0.0.2", 00:07:23.188 "adrfam": "ipv4", 00:07:23.188 "trsvcid": "4420", 00:07:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:23.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:23.188 "hdgst": false, 00:07:23.188 "ddgst": false 00:07:23.188 }, 00:07:23.188 "method": "bdev_nvme_attach_controller" 00:07:23.188 }' 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:23.188 "params": { 00:07:23.188 "name": "Nvme1", 00:07:23.188 "trtype": "tcp", 00:07:23.188 "traddr": "10.0.0.2", 00:07:23.188 "adrfam": "ipv4", 00:07:23.188 "trsvcid": "4420", 00:07:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:23.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:23.188 "hdgst": false, 00:07:23.188 "ddgst": false 00:07:23.188 }, 00:07:23.188 "method": "bdev_nvme_attach_controller" 00:07:23.188 }' 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:23.188 "params": { 00:07:23.188 "name": "Nvme1", 00:07:23.188 "trtype": "tcp", 00:07:23.188 "traddr": "10.0.0.2", 00:07:23.188 "adrfam": "ipv4", 00:07:23.188 "trsvcid": "4420", 00:07:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:23.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:23.188 "hdgst": false, 00:07:23.188 "ddgst": false 00:07:23.188 }, 00:07:23.188 "method": "bdev_nvme_attach_controller" 00:07:23.188 }' 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:23.188 12:50:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:23.188 "params": { 00:07:23.188 "name": "Nvme1", 00:07:23.188 "trtype": "tcp", 00:07:23.188 "traddr": "10.0.0.2", 00:07:23.188 "adrfam": "ipv4", 00:07:23.188 "trsvcid": "4420", 00:07:23.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:23.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:23.188 "hdgst": false, 00:07:23.188 "ddgst": false 00:07:23.188 }, 00:07:23.188 "method": "bdev_nvme_attach_controller" 00:07:23.188 }' 00:07:23.188 [2024-11-18 12:50:20.660274] Starting SPDK v25.01-pre git sha1 
403bf887a / DPDK 24.03.0 initialization... 00:07:23.188 [2024-11-18 12:50:20.660324] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:23.188 [2024-11-18 12:50:20.663636] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:23.188 [2024-11-18 12:50:20.663681] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:23.188 [2024-11-18 12:50:20.665723] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:23.188 [2024-11-18 12:50:20.665768] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:23.188 [2024-11-18 12:50:20.666834] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:07:23.188 [2024-11-18 12:50:20.666875] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:23.188 [2024-11-18 12:50:20.845115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.449 [2024-11-18 12:50:20.888182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.449 [2024-11-18 12:50:20.939836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.449 [2024-11-18 12:50:20.982995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:23.449 [2024-11-18 12:50:21.040680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.449 [2024-11-18 12:50:21.085967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.449 [2024-11-18 12:50:21.094739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:23.449 [2024-11-18 12:50:21.128812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:23.709 Running I/O for 1 seconds... 00:07:23.709 Running I/O for 1 seconds... 00:07:23.709 Running I/O for 1 seconds... 00:07:23.709 Running I/O for 1 seconds... 
00:07:24.650 11647.00 IOPS, 45.50 MiB/s 00:07:24.650 Latency(us) 00:07:24.650 [2024-11-18T11:50:22.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.650 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:24.650 Nvme1n1 : 1.01 11708.13 45.73 0.00 0.00 10895.75 4929.45 16070.57 00:07:24.650 [2024-11-18T11:50:22.352Z] =================================================================================================================== 00:07:24.650 [2024-11-18T11:50:22.352Z] Total : 11708.13 45.73 0.00 0.00 10895.75 4929.45 16070.57 00:07:24.650 246584.00 IOPS, 963.22 MiB/s 00:07:24.650 Latency(us) 00:07:24.650 [2024-11-18T11:50:22.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.650 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:24.650 Nvme1n1 : 1.00 246193.66 961.69 0.00 0.00 517.04 233.29 1560.04 00:07:24.650 [2024-11-18T11:50:22.352Z] =================================================================================================================== 00:07:24.650 [2024-11-18T11:50:22.352Z] Total : 246193.66 961.69 0.00 0.00 517.04 233.29 1560.04 00:07:24.650 11418.00 IOPS, 44.60 MiB/s 00:07:24.650 Latency(us) 00:07:24.650 [2024-11-18T11:50:22.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.650 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:24.650 Nvme1n1 : 1.01 11475.97 44.83 0.00 0.00 11117.13 5157.40 19603.81 00:07:24.650 [2024-11-18T11:50:22.352Z] =================================================================================================================== 00:07:24.650 [2024-11-18T11:50:22.352Z] Total : 11475.97 44.83 0.00 0.00 11117.13 5157.40 19603.81 00:07:24.650 9768.00 IOPS, 38.16 MiB/s 00:07:24.650 Latency(us) 00:07:24.650 [2024-11-18T11:50:22.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.650 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:07:24.650 Nvme1n1 : 1.01 9852.51 38.49 0.00 0.00 12956.56 3960.65 25758.50 00:07:24.650 [2024-11-18T11:50:22.352Z] =================================================================================================================== 00:07:24.650 [2024-11-18T11:50:22.352Z] Total : 9852.51 38.49 0.00 0.00 12956.56 3960.65 25758.50 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2185049 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2185051 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2185054 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:24.911 rmmod nvme_tcp 00:07:24.911 rmmod nvme_fabrics 00:07:24.911 rmmod nvme_keyring 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:24.911 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2185012 ']' 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2185012 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2185012 ']' 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2185012 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2185012 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2185012' 00:07:24.912 killing process with pid 2185012 00:07:24.912 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2185012 00:07:24.912 12:50:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2185012 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.172 12:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:27.714 00:07:27.714 real 0m10.779s 00:07:27.714 user 0m15.784s 00:07:27.714 sys 0m6.293s 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.714 ************************************ 
00:07:27.714 END TEST nvmf_bdev_io_wait 00:07:27.714 ************************************ 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:27.714 ************************************ 00:07:27.714 START TEST nvmf_queue_depth 00:07:27.714 ************************************ 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:27.714 * Looking for test storage... 00:07:27.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:27.714 12:50:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:27.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.714 --rc genhtml_branch_coverage=1 00:07:27.714 --rc genhtml_function_coverage=1 00:07:27.714 --rc genhtml_legend=1 00:07:27.714 --rc geninfo_all_blocks=1 00:07:27.714 --rc 
geninfo_unexecuted_blocks=1 00:07:27.714 00:07:27.714 ' 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:27.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.714 --rc genhtml_branch_coverage=1 00:07:27.714 --rc genhtml_function_coverage=1 00:07:27.714 --rc genhtml_legend=1 00:07:27.714 --rc geninfo_all_blocks=1 00:07:27.714 --rc geninfo_unexecuted_blocks=1 00:07:27.714 00:07:27.714 ' 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:27.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.714 --rc genhtml_branch_coverage=1 00:07:27.714 --rc genhtml_function_coverage=1 00:07:27.714 --rc genhtml_legend=1 00:07:27.714 --rc geninfo_all_blocks=1 00:07:27.714 --rc geninfo_unexecuted_blocks=1 00:07:27.714 00:07:27.714 ' 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:27.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.714 --rc genhtml_branch_coverage=1 00:07:27.714 --rc genhtml_function_coverage=1 00:07:27.714 --rc genhtml_legend=1 00:07:27.714 --rc geninfo_all_blocks=1 00:07:27.714 --rc geninfo_unexecuted_blocks=1 00:07:27.714 00:07:27.714 ' 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.714 12:50:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.714 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.715 12:50:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.715 12:50:25 
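[Editor's note] The "integer expression expected" message captured above is a bash error from `nvmf/common.sh` line 33, where an empty string is compared numerically: `'[' '' -eq 1 ']'`. A minimal sketch of the failure mode and the usual guard (the variable name `flag` is hypothetical, not the one in `common.sh`):

```shell
#!/usr/bin/env bash
# An empty value in a numeric test, [ "" -eq 1 ], is what produced the
# "integer expression expected" error in the trace. Defaulting the value
# to 0 with ${var:-0} before the numeric comparison avoids it.
flag=""                             # empty, as in the failing check
if [ "${flag:-0}" -eq 1 ]; then     # ${flag:-0} yields 0 when flag is empty/unset
  echo "flag set"
else
  echo "flag unset"
fi
# prints "flag unset"
```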
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:27.715 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.297 12:50:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:34.297 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:34.297 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:34.297 Found net devices under 0000:86:00.0: cvl_0_0 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.297 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:34.298 Found net devices under 0000:86:00.1: cvl_0_1 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.298 
12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.298 12:50:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:07:34.298 00:07:34.298 --- 10.0.0.2 ping statistics --- 00:07:34.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.298 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:07:34.298 00:07:34.298 --- 10.0.0.1 ping statistics --- 00:07:34.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.298 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2189054 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
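[Editor's note] The `ipts` call in the trace above expands to a plain `iptables` invocation that tags the rule with an `SPDK_NVMF:` comment, so cleanup can later find and delete exactly the rules the test added. A minimal sketch of that wrapper, with a stub `iptables` function so it runs unprivileged (the stub is an assumption for illustration; the real helper invokes the system binary):

```shell
#!/usr/bin/env bash
# Stub iptables so this sketch runs without root (illustration only).
iptables() { echo "iptables $*"; }

# ipts, as seen in the trace: forward all arguments unchanged, then append
# a comment match tagging the rule with "SPDK_NVMF:<original args>".
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

This matches the expanded command logged at `nvmf/common.sh@790`, where the comment text is the original argument list verbatim.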
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2189054 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2189054 ']' 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.298 [2024-11-18 12:50:31.157261] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:34.298 [2024-11-18 12:50:31.157304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.298 [2024-11-18 12:50:31.237868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.298 [2024-11-18 12:50:31.278797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.298 [2024-11-18 12:50:31.278833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:34.298 [2024-11-18 12:50:31.278841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.298 [2024-11-18 12:50:31.278848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.298 [2024-11-18 12:50:31.278853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.298 [2024-11-18 12:50:31.279446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.298 [2024-11-18 12:50:31.410768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.298 Malloc0 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.298 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.299 [2024-11-18 12:50:31.461078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.299 12:50:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2189079 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2189079 /var/tmp/bdevperf.sock 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2189079 ']' 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:34.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.299 [2024-11-18 12:50:31.513316] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:07:34.299 [2024-11-18 12:50:31.513371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189079 ] 00:07:34.299 [2024-11-18 12:50:31.589529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.299 [2024-11-18 12:50:31.630837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.299 NVMe0n1 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.299 12:50:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:34.299 Running I/O for 10 seconds... 
00:07:36.621 11535.00 IOPS, 45.06 MiB/s [2024-11-18T11:50:35.262Z] 11785.00 IOPS, 46.04 MiB/s [2024-11-18T11:50:36.199Z] 11942.33 IOPS, 46.65 MiB/s [2024-11-18T11:50:37.137Z] 12031.50 IOPS, 47.00 MiB/s [2024-11-18T11:50:38.076Z] 12111.00 IOPS, 47.31 MiB/s [2024-11-18T11:50:39.015Z] 12150.00 IOPS, 47.46 MiB/s [2024-11-18T11:50:39.969Z] 12182.29 IOPS, 47.59 MiB/s [2024-11-18T11:50:41.350Z] 12208.50 IOPS, 47.69 MiB/s [2024-11-18T11:50:42.290Z] 12182.33 IOPS, 47.59 MiB/s [2024-11-18T11:50:42.290Z] 12176.00 IOPS, 47.56 MiB/s 00:07:44.588 Latency(us) 00:07:44.588 [2024-11-18T11:50:42.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.588 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:44.588 Verification LBA range: start 0x0 length 0x4000 00:07:44.588 NVMe0n1 : 10.05 12210.31 47.70 0.00 0.00 83587.71 11283.59 52428.80 00:07:44.588 [2024-11-18T11:50:42.290Z] =================================================================================================================== 00:07:44.588 [2024-11-18T11:50:42.290Z] Total : 12210.31 47.70 0.00 0.00 83587.71 11283.59 52428.80 00:07:44.588 { 00:07:44.588 "results": [ 00:07:44.588 { 00:07:44.588 "job": "NVMe0n1", 00:07:44.588 "core_mask": "0x1", 00:07:44.588 "workload": "verify", 00:07:44.588 "status": "finished", 00:07:44.588 "verify_range": { 00:07:44.588 "start": 0, 00:07:44.588 "length": 16384 00:07:44.588 }, 00:07:44.588 "queue_depth": 1024, 00:07:44.588 "io_size": 4096, 00:07:44.588 "runtime": 10.051752, 00:07:44.588 "iops": 12210.309207787855, 00:07:44.588 "mibps": 47.69652034292131, 00:07:44.588 "io_failed": 0, 00:07:44.588 "io_timeout": 0, 00:07:44.588 "avg_latency_us": 83587.71099315067, 00:07:44.588 "min_latency_us": 11283.589565217391, 00:07:44.588 "max_latency_us": 52428.8 00:07:44.588 } 00:07:44.588 ], 00:07:44.588 "core_count": 1 00:07:44.588 } 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
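[Editor's note] The MiB/s column in the bdevperf summary above follows directly from the reported IOPS and the 4096-byte I/O size (`-o 4096`). A quick arithmetic check of the final numbers:

```shell
#!/usr/bin/env bash
# Verify the reported throughput: bytes/s = IOPS * io_size,
# MiB/s = bytes/s / 2^20. With io_size = 4096 this reduces to IOPS / 256.
mibps=$(awk 'BEGIN { printf "%.5f", 12210.309207787855 * 4096 / (1024 * 1024) }')
echo "$mibps MiB/s"
# prints "47.69652 MiB/s", matching the "mibps" field in the JSON result
```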
killprocess 2189079 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2189079 ']' 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2189079 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2189079 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2189079' 00:07:44.588 killing process with pid 2189079 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2189079 00:07:44.588 Received shutdown signal, test time was about 10.000000 seconds 00:07:44.588 00:07:44.588 Latency(us) 00:07:44.588 [2024-11-18T11:50:42.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.588 [2024-11-18T11:50:42.290Z] =================================================================================================================== 00:07:44.588 [2024-11-18T11:50:42.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2189079 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.588 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.588 rmmod nvme_tcp 00:07:44.588 rmmod nvme_fabrics 00:07:44.588 rmmod nvme_keyring 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2189054 ']' 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2189054 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2189054 ']' 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2189054 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2189054 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2189054' 00:07:44.849 killing process with pid 2189054 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2189054 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2189054 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.849 12:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.404 12:50:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:47.404 00:07:47.404 real 0m19.722s 00:07:47.404 user 0m23.010s 00:07:47.404 sys 0m6.093s 00:07:47.404 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.404 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.404 ************************************ 00:07:47.404 END TEST nvmf_queue_depth 00:07:47.404 ************************************ 00:07:47.404 12:50:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:47.404 12:50:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.405 ************************************ 00:07:47.405 START TEST nvmf_target_multipath 00:07:47.405 ************************************ 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:47.405 * Looking for test storage... 
00:07:47.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:47.405 12:50:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:47.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.405 --rc genhtml_branch_coverage=1 00:07:47.405 --rc genhtml_function_coverage=1 00:07:47.405 --rc genhtml_legend=1 00:07:47.405 --rc geninfo_all_blocks=1 00:07:47.405 --rc geninfo_unexecuted_blocks=1 00:07:47.405 00:07:47.405 ' 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:47.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.405 --rc genhtml_branch_coverage=1 00:07:47.405 --rc genhtml_function_coverage=1 00:07:47.405 --rc genhtml_legend=1 00:07:47.405 --rc geninfo_all_blocks=1 00:07:47.405 --rc geninfo_unexecuted_blocks=1 00:07:47.405 00:07:47.405 ' 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:47.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.405 --rc genhtml_branch_coverage=1 00:07:47.405 --rc genhtml_function_coverage=1 00:07:47.405 --rc genhtml_legend=1 00:07:47.405 --rc geninfo_all_blocks=1 00:07:47.405 --rc geninfo_unexecuted_blocks=1 00:07:47.405 00:07:47.405 ' 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:47.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.405 --rc genhtml_branch_coverage=1 00:07:47.405 --rc genhtml_function_coverage=1 00:07:47.405 --rc genhtml_legend=1 00:07:47.405 --rc geninfo_all_blocks=1 00:07:47.405 --rc geninfo_unexecuted_blocks=1 00:07:47.405 00:07:47.405 ' 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.405 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.406 12:50:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:53.986 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:53.986 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:07:53.986 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:53.987 Found net devices under 0000:86:00.0: cvl_0_0 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.987 12:50:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:53.987 Found net devices under 0000:86:00.1: cvl_0_1 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:07:53.987 00:07:53.987 --- 10.0.0.2 ping statistics --- 00:07:53.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.987 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:07:53.987 00:07:53.987 --- 10.0.0.1 ping statistics --- 00:07:53.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.987 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:53.987 only one NIC for nvmf test 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:53.987 12:50:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.987 12:50:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:53.987 rmmod nvme_tcp 00:07:53.987 rmmod nvme_fabrics 00:07:53.987 rmmod nvme_keyring 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.987 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.988 12:50:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:55.899 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:55.900 00:07:55.900 real 0m8.455s 00:07:55.900 user 0m1.832s 00:07:55.900 sys 0m4.627s 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:55.900 ************************************ 00:07:55.900 END TEST nvmf_target_multipath 00:07:55.900 ************************************ 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:55.900 ************************************ 00:07:55.900 START TEST nvmf_zcopy 00:07:55.900 ************************************ 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:55.900 * Looking for test storage... 00:07:55.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.900 12:50:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:55.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.900 --rc genhtml_branch_coverage=1 00:07:55.900 --rc genhtml_function_coverage=1 00:07:55.900 --rc genhtml_legend=1 00:07:55.900 --rc geninfo_all_blocks=1 00:07:55.900 --rc geninfo_unexecuted_blocks=1 00:07:55.900 00:07:55.900 ' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:55.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.900 --rc genhtml_branch_coverage=1 00:07:55.900 --rc genhtml_function_coverage=1 00:07:55.900 --rc genhtml_legend=1 00:07:55.900 --rc geninfo_all_blocks=1 00:07:55.900 --rc geninfo_unexecuted_blocks=1 00:07:55.900 00:07:55.900 ' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:55.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.900 --rc genhtml_branch_coverage=1 00:07:55.900 --rc genhtml_function_coverage=1 00:07:55.900 --rc genhtml_legend=1 00:07:55.900 --rc geninfo_all_blocks=1 00:07:55.900 --rc geninfo_unexecuted_blocks=1 00:07:55.900 00:07:55.900 ' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:55.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.900 --rc genhtml_branch_coverage=1 00:07:55.900 --rc 
genhtml_function_coverage=1 00:07:55.900 --rc genhtml_legend=1 00:07:55.900 --rc geninfo_all_blocks=1 00:07:55.900 --rc geninfo_unexecuted_blocks=1 00:07:55.900 00:07:55.900 ' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.900 12:50:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.900 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:55.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:55.901 12:50:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:55.901 12:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.480 12:50:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.480 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:02.481 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:02.481 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:02.481 Found net devices under 0000:86:00.0: cvl_0_0 00:08:02.481 12:50:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:02.481 Found net devices under 0000:86:00.1: cvl_0_1 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.481 12:50:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:08:02.481 00:08:02.481 --- 10.0.0.2 ping statistics --- 00:08:02.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.481 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:08:02.481 00:08:02.481 --- 10.0.0.1 ping statistics --- 00:08:02.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.481 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2197985 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2197985 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2197985 ']' 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:02.481 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.481 [2024-11-18 12:50:59.487736] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:08:02.481 [2024-11-18 12:50:59.487785] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.481 [2024-11-18 12:50:59.566590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.481 [2024-11-18 12:50:59.606093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.481 [2024-11-18 12:50:59.606130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:02.481 [2024-11-18 12:50:59.606138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.481 [2024-11-18 12:50:59.606143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.481 [2024-11-18 12:50:59.606149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.481 [2024-11-18 12:50:59.606733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.482 [2024-11-18 12:50:59.753617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.482 [2024-11-18 12:50:59.777851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.482 malloc0 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.482 { 00:08:02.482 "params": { 00:08:02.482 "name": "Nvme$subsystem", 00:08:02.482 "trtype": "$TEST_TRANSPORT", 00:08:02.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.482 "adrfam": "ipv4", 00:08:02.482 "trsvcid": "$NVMF_PORT", 00:08:02.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.482 "hdgst": ${hdgst:-false}, 00:08:02.482 "ddgst": ${ddgst:-false} 00:08:02.482 }, 00:08:02.482 "method": "bdev_nvme_attach_controller" 00:08:02.482 } 00:08:02.482 EOF 00:08:02.482 )") 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:02.482 12:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.482 "params": { 00:08:02.482 "name": "Nvme1", 00:08:02.482 "trtype": "tcp", 00:08:02.482 "traddr": "10.0.0.2", 00:08:02.482 "adrfam": "ipv4", 00:08:02.482 "trsvcid": "4420", 00:08:02.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.482 "hdgst": false, 00:08:02.482 "ddgst": false 00:08:02.482 }, 00:08:02.482 "method": "bdev_nvme_attach_controller" 00:08:02.482 }' 00:08:02.482 [2024-11-18 12:50:59.864940] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:08:02.482 [2024-11-18 12:50:59.864981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198005 ] 00:08:02.482 [2024-11-18 12:50:59.936562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.482 [2024-11-18 12:50:59.977925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.743 Running I/O for 10 seconds... 
00:08:04.646 8370.00 IOPS, 65.39 MiB/s [2024-11-18T11:51:03.729Z] 8455.00 IOPS, 66.05 MiB/s [2024-11-18T11:51:04.667Z] 8497.67 IOPS, 66.39 MiB/s [2024-11-18T11:51:05.606Z] 8511.75 IOPS, 66.50 MiB/s [2024-11-18T11:51:06.546Z] 8518.60 IOPS, 66.55 MiB/s [2024-11-18T11:51:07.485Z] 8526.33 IOPS, 66.61 MiB/s [2024-11-18T11:51:08.425Z] 8529.00 IOPS, 66.63 MiB/s [2024-11-18T11:51:09.363Z] 8532.25 IOPS, 66.66 MiB/s [2024-11-18T11:51:10.744Z] 8535.22 IOPS, 66.68 MiB/s [2024-11-18T11:51:10.744Z] 8536.30 IOPS, 66.69 MiB/s 00:08:13.042 Latency(us) 00:08:13.042 [2024-11-18T11:51:10.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.042 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:13.042 Verification LBA range: start 0x0 length 0x1000 00:08:13.042 Nvme1n1 : 10.01 8537.71 66.70 0.00 0.00 14949.78 2621.44 22225.25 00:08:13.042 [2024-11-18T11:51:10.744Z] =================================================================================================================== 00:08:13.042 [2024-11-18T11:51:10.744Z] Total : 8537.71 66.70 0.00 0.00 14949.78 2621.44 22225.25 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2200353 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.042 12:51:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:13.042 { 00:08:13.042 "params": { 00:08:13.042 "name": "Nvme$subsystem", 00:08:13.042 "trtype": "$TEST_TRANSPORT", 00:08:13.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.042 "adrfam": "ipv4", 00:08:13.042 "trsvcid": "$NVMF_PORT", 00:08:13.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.042 "hdgst": ${hdgst:-false}, 00:08:13.042 "ddgst": ${ddgst:-false} 00:08:13.042 }, 00:08:13.042 "method": "bdev_nvme_attach_controller" 00:08:13.042 } 00:08:13.042 EOF 00:08:13.042 )") 00:08:13.042 [2024-11-18 12:51:10.505639] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.042 [2024-11-18 12:51:10.505674] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:13.042 12:51:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.042 "params": { 00:08:13.042 "name": "Nvme1", 00:08:13.042 "trtype": "tcp", 00:08:13.042 "traddr": "10.0.0.2", 00:08:13.042 "adrfam": "ipv4", 00:08:13.042 "trsvcid": "4420", 00:08:13.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.042 "hdgst": false, 00:08:13.042 "ddgst": false 00:08:13.042 }, 00:08:13.042 "method": "bdev_nvme_attach_controller" 00:08:13.043 }' 00:08:13.043 [2024-11-18 12:51:10.517643] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.517659] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.529660] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.529670] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.541693] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.541703] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.547374] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:08:13.043 [2024-11-18 12:51:10.547416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200353 ] 00:08:13.043 [2024-11-18 12:51:10.553720] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.553730] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.565753] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.565763] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.577787] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.577796] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.589819] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.589829] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.601859] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.601873] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.613882] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.613893] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.622121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.043 [2024-11-18 12:51:10.625916] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:13.043 [2024-11-18 12:51:10.625928] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.637951] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.637966] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.650007] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.650020] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.662014] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.662024] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.664953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.043 [2024-11-18 12:51:10.674047] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.674059] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.686086] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.686108] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.698113] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.698130] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.710142] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.710157] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.722178] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.722191] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.043 [2024-11-18 12:51:10.734206] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.043 [2024-11-18 12:51:10.734219] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.746239] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.746251] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.758290] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.758310] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.770308] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.770323] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.782342] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.782361] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.794381] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.794395] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.806411] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.806423] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.818437] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.818447] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.830469] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.830477] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.842507] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.842520] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.854551] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.854562] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.866565] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.866574] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.878598] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.878618] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.303 [2024-11-18 12:51:10.890644] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.303 [2024-11-18 12:51:10.890657] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.304 [2024-11-18 12:51:10.902666] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.304 [2024-11-18 12:51:10.902680] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.304 [2024-11-18 12:51:10.914697] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.304 
[2024-11-18 12:51:10.914705] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.304
[2024-11-18 12:51:10.926730] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.304
[2024-11-18 12:51:10.926741] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.304
[2024-11-18 12:51:10.938770] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.304
[2024-11-18 12:51:10.938788] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.304
[2024-11-18 12:51:10.946785] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:13.304
[2024-11-18 12:51:10.946796] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.304
Running I/O for 5 seconds... 00:08:13.304
[same two-line error pair repeats for each add-namespace attempt, 2024-11-18 12:51:10.962126 through 12:51:11.953641, elapsed 00:08:13.304-00:08:14.347]
16407.00 IOPS, 128.18 MiB/s [2024-11-18T11:51:12.049Z]
[same two-line error pair repeats for each add-namespace attempt, 2024-11-18 12:51:11.962315 through 12:51:12.709943, elapsed 00:08:14.347-00:08:15.129]
[2024-11-18 12:51:12.719000]
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.129 [2024-11-18 12:51:12.719019] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.129 [2024-11-18 12:51:12.733895] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.129 [2024-11-18 12:51:12.733915] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.129 [2024-11-18 12:51:12.747724] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.129 [2024-11-18 12:51:12.747743] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.129 [2024-11-18 12:51:12.761653] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.129 [2024-11-18 12:51:12.761672] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.129 [2024-11-18 12:51:12.771477] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.129 [2024-11-18 12:51:12.771496] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.129 [2024-11-18 12:51:12.780426] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.129 [2024-11-18 12:51:12.780449] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.129 [2024-11-18 12:51:12.795472] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.130 [2024-11-18 12:51:12.795491] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.130 [2024-11-18 12:51:12.811410] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.130 [2024-11-18 12:51:12.811429] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.130 [2024-11-18 12:51:12.820554] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:15.130 [2024-11-18 12:51:12.820573] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.829916] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.829935] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.839334] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.839358] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.854487] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.854506] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.869618] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.869638] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.879103] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.879123] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.893244] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.893263] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.902209] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.902228] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.916830] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 
[2024-11-18 12:51:12.916850] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.931078] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.931098] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.946216] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.946235] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.955392] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.955410] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 16488.00 IOPS, 128.81 MiB/s [2024-11-18T11:51:13.092Z] [2024-11-18 12:51:12.970103] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.970122] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.983865] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.983884] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:12.992914] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:12.992932] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:13.001623] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:13.001641] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:13.016267] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 
[2024-11-18 12:51:13.016285] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:13.025236] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:13.025255] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:13.039871] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:13.039890] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:13.053446] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:13.053465] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:13.068412] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:13.068431] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.390 [2024-11-18 12:51:13.079488] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.390 [2024-11-18 12:51:13.079506] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.650 [2024-11-18 12:51:13.089111] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.650 [2024-11-18 12:51:13.089130] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.650 [2024-11-18 12:51:13.103600] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.650 [2024-11-18 12:51:13.103619] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.112682] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.112700] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.127252] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.127270] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.136228] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.136247] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.145626] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.145646] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.160696] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.160714] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.176068] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.176086] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.190105] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.190124] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.199222] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.199241] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.208095] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.208113] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:15.651 [2024-11-18 12:51:13.222656] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.222674] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.236663] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.236682] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.247297] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.247315] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.261851] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.261870] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.270767] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.270785] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.285705] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.285723] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.296854] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.296874] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.306329] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.306348] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.315854] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.315872] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.325329] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.325347] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.340155] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.340173] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.651 [2024-11-18 12:51:13.349463] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.651 [2024-11-18 12:51:13.349482] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.364043] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.364063] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.373190] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.373208] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.382948] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.382967] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.397769] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.397788] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.408906] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.408925] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.418570] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.418589] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.427413] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.427432] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.436812] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.436830] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.451835] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.451858] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.462853] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.462872] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.477314] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.477332] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.491445] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 [2024-11-18 12:51:13.491462] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.911 [2024-11-18 12:51:13.502501] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.911 
[2024-11-18 12:51:13.502519] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.912 [2024-11-18 12:51:13.516951] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.912 [2024-11-18 12:51:13.516968] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.912 [2024-11-18 12:51:13.530055] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.912 [2024-11-18 12:51:13.530073] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.912 [2024-11-18 12:51:13.544469] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.912 [2024-11-18 12:51:13.544487] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.912 [2024-11-18 12:51:13.558525] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.912 [2024-11-18 12:51:13.558554] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.912 [2024-11-18 12:51:13.572756] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.912 [2024-11-18 12:51:13.572775] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.912 [2024-11-18 12:51:13.588734] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.912 [2024-11-18 12:51:13.588753] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.912 [2024-11-18 12:51:13.597796] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.912 [2024-11-18 12:51:13.597815] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.912 [2024-11-18 12:51:13.607451] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.912 [2024-11-18 12:51:13.607470] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.171 [2024-11-18 12:51:13.616962] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.171 [2024-11-18 12:51:13.616981] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.171 [2024-11-18 12:51:13.631278] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.171 [2024-11-18 12:51:13.631296] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.171 [2024-11-18 12:51:13.644753] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.171 [2024-11-18 12:51:13.644771] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.171 [2024-11-18 12:51:13.658921] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.171 [2024-11-18 12:51:13.658939] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.171 [2024-11-18 12:51:13.668000] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.171 [2024-11-18 12:51:13.668019] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.171 [2024-11-18 12:51:13.677594] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.171 [2024-11-18 12:51:13.677612] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.171 [2024-11-18 12:51:13.686923] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.171 [2024-11-18 12:51:13.686947] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.171 [2024-11-18 12:51:13.701594] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.171 [2024-11-18 12:51:13.701613] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:16.171 [2024-11-18 12:51:13.710732] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.171 [2024-11-18 12:51:13.710751] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.719653] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.719671] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.728477] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.728495] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.737791] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.737809] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.747261] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.747278] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.756753] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.756770] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.766246] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.766264] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.775823] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.775842] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.785394] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.785415] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.800342] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.800366] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.809532] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.809550] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.818649] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.818667] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.828002] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.828020] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.837491] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.837509] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.852647] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.852665] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.172 [2024-11-18 12:51:13.868038] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.172 [2024-11-18 12:51:13.868057] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:13.877837] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.877856] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:13.887292] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.887316] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:13.895999] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.896017] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:13.910897] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.910916] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:13.926358] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.926376] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:13.935395] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.935413] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:13.950772] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.950790] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 16474.67 IOPS, 128.71 MiB/s [2024-11-18T11:51:14.133Z] [2024-11-18 12:51:13.966145] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.966163] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:13.980238] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.980256] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:13.994424] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:13.994442] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.003329] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.003347] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.012862] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.012880] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.022022] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.022040] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.036482] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.036501] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.045398] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.045417] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.054568] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.054586] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.069005] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 
[2024-11-18 12:51:14.069025] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.082959] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.082979] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.096980] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.097000] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.110550] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.110570] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.431 [2024-11-18 12:51:14.124962] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.431 [2024-11-18 12:51:14.124981] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.138587] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.138606] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.153020] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.153040] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.167628] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.167647] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.178585] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.178603] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.193220] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.193238] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.207166] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.207185] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.221458] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.221476] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.235420] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.235439] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.245132] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.245150] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.259636] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.259654] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.269269] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.269288] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.690 [2024-11-18 12:51:14.284711] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.690 [2024-11-18 12:51:14.284731] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:17.211 [2024-11-18 12:51:14.858365] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.211 [2024-11-18 12:51:14.858383] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.211 [2024-11-18 12:51:14.872481] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.211 [2024-11-18 12:51:14.872499] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.211 [2024-11-18 12:51:14.886081] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.211 [2024-11-18 12:51:14.886099] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.211 [2024-11-18 12:51:14.900778] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.211 [2024-11-18 12:51:14.900797] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.471 [2024-11-18 12:51:14.916525] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.471 [2024-11-18 12:51:14.916545] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.471 [2024-11-18 12:51:14.930641] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.471 [2024-11-18 12:51:14.930659] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.471 [2024-11-18 12:51:14.944761] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.471 [2024-11-18 12:51:14.944779] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.471 [2024-11-18 12:51:14.959172] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.471 [2024-11-18 12:51:14.959190] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.471 16463.75 IOPS, 128.62 MiB/s 
[2024-11-18 12:51:15.857064] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.253 [2024-11-18 12:51:15.871311] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.253 [2024-11-18 12:51:15.871329] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.253 [2024-11-18 12:51:15.885147] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.253 [2024-11-18 12:51:15.885165] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.253 [2024-11-18 12:51:15.899796] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.253 [2024-11-18 12:51:15.899814] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.253 [2024-11-18 12:51:15.910819] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.253 [2024-11-18 12:51:15.910836] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.253 [2024-11-18 12:51:15.920336] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.253 [2024-11-18 12:51:15.920361] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.253 [2024-11-18 12:51:15.935164] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.253 [2024-11-18 12:51:15.935182] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.253 [2024-11-18 12:51:15.948505] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.253 [2024-11-18 12:51:15.948523] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 [2024-11-18 12:51:15.963214] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:15.963233] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:18.513 16473.80 IOPS, 128.70 MiB/s [2024-11-18T11:51:16.215Z] [2024-11-18 12:51:15.971381] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:18.513 [2024-11-18 12:51:15.971398] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:18.513
00:08:18.513 Latency(us)
00:08:18.513 [2024-11-18T11:51:16.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:18.513 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:18.513 Nvme1n1 : 5.01 16478.53 128.74 0.00 0.00 7760.83 3063.10 15728.64
00:08:18.513 [2024-11-18T11:51:16.215Z] ===================================================================================================================
00:08:18.513 [2024-11-18T11:51:16.215Z] Total : 16478.53 128.74 0.00 0.00 7760.83 3063.10 15728.64
00:08:18.513 [2024-11-18 12:51:15.983405] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:18.513 [2024-11-18 12:51:15.983420] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:18.513 [2024-11-18 12:51:15.995436] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:18.513 [2024-11-18 12:51:15.995449] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:18.513 [2024-11-18 12:51:16.007477] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:18.513 [2024-11-18 12:51:16.007500] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:18.513 [2024-11-18 12:51:16.019498] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:18.513 [2024-11-18 12:51:16.019512] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:18.513 [2024-11-18 12:51:16.031534]
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:16.031549] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 [2024-11-18 12:51:16.043562] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:16.043576] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 [2024-11-18 12:51:16.055615] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:16.055630] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 [2024-11-18 12:51:16.067622] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:16.067637] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 [2024-11-18 12:51:16.079653] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:16.079665] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 [2024-11-18 12:51:16.091683] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:16.091693] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 [2024-11-18 12:51:16.103718] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:16.103729] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 [2024-11-18 12:51:16.115750] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:16.115761] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 [2024-11-18 12:51:16.127785] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:18.513 [2024-11-18 12:51:16.127795] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2200353) - No such process 00:08:18.513 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2200353 00:08:18.513 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.513 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.513 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:18.513 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.513 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:18.513 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.514 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:18.514 delay0 00:08:18.514 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.514 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:18.514 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.514 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:18.514 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.514 12:51:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:18.773 [2024-11-18 12:51:16.274667] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:25.355 Initializing NVMe Controllers 00:08:25.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:25.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:25.355 Initialization complete. Launching workers. 00:08:25.355 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 173 00:08:25.355 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 460, failed to submit 33 00:08:25.355 success 272, unsuccessful 188, failed 0 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.355 rmmod nvme_tcp 00:08:25.355 rmmod nvme_fabrics 00:08:25.355 rmmod nvme_keyring 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.355 12:51:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2197985 ']' 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2197985 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2197985 ']' 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2197985 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2197985 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2197985' 00:08:25.355 killing process with pid 2197985 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2197985 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2197985 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # 
iptr 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.355 12:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.269 12:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:27.269 00:08:27.269 real 0m31.570s 00:08:27.269 user 0m42.492s 00:08:27.269 sys 0m10.889s 00:08:27.269 12:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.269 12:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.269 ************************************ 00:08:27.269 END TEST nvmf_zcopy 00:08:27.269 ************************************ 00:08:27.269 12:51:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:27.269 12:51:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:27.269 12:51:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.269 12:51:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.269 
************************************ 00:08:27.269 START TEST nvmf_nmic 00:08:27.269 ************************************ 00:08:27.269 12:51:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:27.270 * Looking for test storage... 00:08:27.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.270 12:51:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:27.270 12:51:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:08:27.270 12:51:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.531 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:27.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.532 --rc genhtml_branch_coverage=1 00:08:27.532 --rc genhtml_function_coverage=1 00:08:27.532 --rc genhtml_legend=1 00:08:27.532 --rc geninfo_all_blocks=1 00:08:27.532 --rc geninfo_unexecuted_blocks=1 00:08:27.532 00:08:27.532 ' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:27.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.532 --rc genhtml_branch_coverage=1 00:08:27.532 --rc genhtml_function_coverage=1 00:08:27.532 --rc genhtml_legend=1 00:08:27.532 --rc geninfo_all_blocks=1 00:08:27.532 --rc geninfo_unexecuted_blocks=1 00:08:27.532 00:08:27.532 ' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:27.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.532 --rc genhtml_branch_coverage=1 00:08:27.532 --rc genhtml_function_coverage=1 00:08:27.532 --rc genhtml_legend=1 00:08:27.532 --rc geninfo_all_blocks=1 00:08:27.532 --rc geninfo_unexecuted_blocks=1 00:08:27.532 00:08:27.532 ' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:27.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.532 --rc genhtml_branch_coverage=1 00:08:27.532 --rc genhtml_function_coverage=1 00:08:27.532 --rc genhtml_legend=1 00:08:27.532 --rc geninfo_all_blocks=1 00:08:27.532 --rc geninfo_unexecuted_blocks=1 00:08:27.532 00:08:27.532 ' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.532 
12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 
00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:27.532 
12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:27.532 12:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.114 12:51:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:34.114 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:34.114 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:34.114 Found net devices under 0000:86:00.0: cvl_0_0 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:34.114 Found net devices under 0000:86:00.1: cvl_0_1 00:08:34.114 
12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.114 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.115 12:51:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:08:34.115 00:08:34.115 --- 10.0.0.2 ping statistics --- 00:08:34.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.115 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:08:34.115 00:08:34.115 --- 10.0.0.1 ping statistics --- 00:08:34.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.115 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2205800 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2205800 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2205800 ']' 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 [2024-11-18 12:51:31.138076] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:08:34.115 [2024-11-18 12:51:31.138129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.115 [2024-11-18 12:51:31.219985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.115 [2024-11-18 12:51:31.264185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.115 [2024-11-18 12:51:31.264224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:34.115 [2024-11-18 12:51:31.264231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.115 [2024-11-18 12:51:31.264237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.115 [2024-11-18 12:51:31.264242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.115 [2024-11-18 12:51:31.265792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.115 [2024-11-18 12:51:31.265905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.115 [2024-11-18 12:51:31.266010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.115 [2024-11-18 12:51:31.266011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 [2024-11-18 12:51:31.403607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.115 
12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 Malloc0 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 [2024-11-18 12:51:31.465940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:34.115 test case1: single bdev can't be used in multiple subsystems 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.115 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.115 [2024-11-18 12:51:31.489824] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:34.115 [2024-11-18 
12:51:31.489843] subsystem.c:2300:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:34.115 [2024-11-18 12:51:31.489850] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.115 request: 00:08:34.115 { 00:08:34.115 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:34.115 "namespace": { 00:08:34.115 "bdev_name": "Malloc0", 00:08:34.115 "no_auto_visible": false 00:08:34.115 }, 00:08:34.115 "method": "nvmf_subsystem_add_ns", 00:08:34.115 "req_id": 1 00:08:34.115 } 00:08:34.116 Got JSON-RPC error response 00:08:34.116 response: 00:08:34.116 { 00:08:34.116 "code": -32602, 00:08:34.116 "message": "Invalid parameters" 00:08:34.116 } 00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:34.116 Adding namespace failed - expected result. 
00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:34.116 test case2: host connect to nvmf target in multiple paths 00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.116 [2024-11-18 12:51:31.497958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.116 12:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.051 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:36.431 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:36.431 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:08:36.431 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:36.431 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:36.431 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 
00:08:38.340 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:38.340 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:38.340 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:38.340 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:38.340 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:38.340 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:08:38.340 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:38.340 [global] 00:08:38.340 thread=1 00:08:38.340 invalidate=1 00:08:38.340 rw=write 00:08:38.340 time_based=1 00:08:38.340 runtime=1 00:08:38.340 ioengine=libaio 00:08:38.340 direct=1 00:08:38.340 bs=4096 00:08:38.340 iodepth=1 00:08:38.340 norandommap=0 00:08:38.340 numjobs=1 00:08:38.340 00:08:38.340 verify_dump=1 00:08:38.340 verify_backlog=512 00:08:38.340 verify_state_save=0 00:08:38.340 do_verify=1 00:08:38.340 verify=crc32c-intel 00:08:38.340 [job0] 00:08:38.340 filename=/dev/nvme0n1 00:08:38.340 Could not set queue depth (nvme0n1) 00:08:38.599 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.599 fio-3.35 00:08:38.599 Starting 1 thread 00:08:39.981 00:08:39.981 job0: (groupid=0, jobs=1): err= 0: pid=2206809: Mon Nov 18 12:51:37 2024 00:08:39.981 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:08:39.981 slat (nsec): min=9703, max=25437, avg=22532.32, stdev=2981.71 00:08:39.981 clat (usec): min=40447, max=41088, avg=40947.98, stdev=130.36 00:08:39.981 lat (usec): min=40457, max=41111, 
avg=40970.51, stdev=132.83 00:08:39.981 clat percentiles (usec): 00:08:39.981 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:39.981 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:39.981 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:39.981 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:39.981 | 99.99th=[41157] 00:08:39.981 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:08:39.981 slat (usec): min=10, max=27112, avg=65.28, stdev=1197.68 00:08:39.981 clat (usec): min=116, max=329, avg=143.78, stdev=22.52 00:08:39.981 lat (usec): min=128, max=27391, avg=209.06, stdev=1203.86 00:08:39.981 clat percentiles (usec): 00:08:39.981 | 1.00th=[ 121], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 127], 00:08:39.981 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 141], 00:08:39.981 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 176], 00:08:39.981 | 99.00th=[ 186], 99.50th=[ 251], 99.90th=[ 330], 99.95th=[ 330], 00:08:39.981 | 99.99th=[ 330] 00:08:39.981 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:39.981 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:39.981 lat (usec) : 250=95.32%, 500=0.56% 00:08:39.981 lat (msec) : 50=4.12% 00:08:39.981 cpu : usr=0.30%, sys=0.79%, ctx=537, majf=0, minf=1 00:08:39.981 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.981 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.981 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.981 00:08:39.981 Run status group 0 (all jobs): 00:08:39.981 READ: bw=87.1KiB/s (89.2kB/s), 87.1KiB/s-87.1KiB/s (89.2kB/s-89.2kB/s), io=88.0KiB (90.1kB), 
run=1010-1010msec 00:08:39.981 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:08:39.981 00:08:39.981 Disk stats (read/write): 00:08:39.981 nvme0n1: ios=71/512, merge=0/0, ticks=1699/72, in_queue=1771, util=98.50% 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.981 rmmod nvme_tcp 00:08:39.981 rmmod nvme_fabrics 00:08:39.981 rmmod nvme_keyring 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2205800 ']' 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2205800 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2205800 ']' 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2205800 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2205800 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2205800' 00:08:39.981 killing process with pid 2205800 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2205800 00:08:39.981 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2205800 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.242 12:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.151 12:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.151 00:08:42.151 real 0m14.947s 00:08:42.151 user 0m33.084s 00:08:42.151 sys 0m5.305s 00:08:42.151 12:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.151 12:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.151 ************************************ 00:08:42.151 END TEST nvmf_nmic 00:08:42.151 ************************************ 00:08:42.151 12:51:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:08:42.151 12:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:42.151 12:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.151 12:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.411 ************************************ 00:08:42.411 START TEST nvmf_fio_target 00:08:42.411 ************************************ 00:08:42.411 12:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:42.411 * Looking for test storage... 00:08:42.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.411 12:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:42.411 12:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:08:42.411 12:51:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:42.411 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.412 12:51:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:42.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.412 --rc genhtml_branch_coverage=1 00:08:42.412 --rc genhtml_function_coverage=1 00:08:42.412 --rc genhtml_legend=1 00:08:42.412 --rc geninfo_all_blocks=1 00:08:42.412 --rc geninfo_unexecuted_blocks=1 00:08:42.412 00:08:42.412 ' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:42.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.412 --rc genhtml_branch_coverage=1 00:08:42.412 --rc genhtml_function_coverage=1 00:08:42.412 --rc genhtml_legend=1 00:08:42.412 --rc geninfo_all_blocks=1 00:08:42.412 --rc geninfo_unexecuted_blocks=1 00:08:42.412 00:08:42.412 ' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:42.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.412 --rc genhtml_branch_coverage=1 00:08:42.412 --rc genhtml_function_coverage=1 00:08:42.412 --rc genhtml_legend=1 00:08:42.412 --rc geninfo_all_blocks=1 00:08:42.412 --rc geninfo_unexecuted_blocks=1 00:08:42.412 00:08:42.412 ' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:42.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.412 --rc 
genhtml_branch_coverage=1 00:08:42.412 --rc genhtml_function_coverage=1 00:08:42.412 --rc genhtml_legend=1 00:08:42.412 --rc geninfo_all_blocks=1 00:08:42.412 --rc geninfo_unexecuted_blocks=1 00:08:42.412 00:08:42.412 ' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.412 12:51:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.102 12:51:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:49.102 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:49.102 12:51:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.102 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:49.102 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:49.103 Found net devices under 0000:86:00.0: cvl_0_0 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:49.103 Found net devices under 0000:86:00.1: cvl_0_1 
00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:08:49.103 00:08:49.103 --- 10.0.0.2 ping statistics --- 00:08:49.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.103 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:08:49.103 00:08:49.103 --- 10.0.0.1 ping statistics --- 00:08:49.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.103 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.103 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2210578 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2210578 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2210578 ']' 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:49.103 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.103 [2024-11-18 12:51:46.082748] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:08:49.103 [2024-11-18 12:51:46.082798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.103 [2024-11-18 12:51:46.173799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.103 [2024-11-18 12:51:46.216655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.103 [2024-11-18 12:51:46.216704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.103 [2024-11-18 12:51:46.216711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.103 [2024-11-18 12:51:46.216717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.103 [2024-11-18 12:51:46.216722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:49.103 [2024-11-18 12:51:46.218170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.103 [2024-11-18 12:51:46.218283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.103 [2024-11-18 12:51:46.218390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.103 [2024-11-18 12:51:46.218391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.363 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:49.363 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:08:49.363 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.363 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.363 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:49.363 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.363 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.623 [2024-11-18 12:51:47.146583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.623 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:49.883 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:49.883 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.144 12:51:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:50.144 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.144 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:50.144 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.405 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:50.405 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:50.664 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.924 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:50.924 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.184 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:51.184 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.184 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:51.184 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:51.444 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:51.702 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:51.702 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.961 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:51.961 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:52.221 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.221 [2024-11-18 12:51:49.876670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.221 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:52.481 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:52.740 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:08:54.121 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:54.121 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:08:54.121 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.121 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:08:54.121 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:08:54.122 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:08:56.032 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:56.032 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:56.032 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.032 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:08:56.032 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.032 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:08:56.032 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:56.032 [global] 00:08:56.032 thread=1 00:08:56.032 invalidate=1 00:08:56.032 rw=write 00:08:56.032 time_based=1 00:08:56.032 runtime=1 00:08:56.032 ioengine=libaio 00:08:56.032 direct=1 00:08:56.032 bs=4096 00:08:56.032 iodepth=1 00:08:56.032 norandommap=0 00:08:56.032 numjobs=1 00:08:56.032 00:08:56.032 
verify_dump=1 00:08:56.032 verify_backlog=512 00:08:56.032 verify_state_save=0 00:08:56.032 do_verify=1 00:08:56.032 verify=crc32c-intel 00:08:56.032 [job0] 00:08:56.032 filename=/dev/nvme0n1 00:08:56.032 [job1] 00:08:56.032 filename=/dev/nvme0n2 00:08:56.032 [job2] 00:08:56.032 filename=/dev/nvme0n3 00:08:56.032 [job3] 00:08:56.032 filename=/dev/nvme0n4 00:08:56.032 Could not set queue depth (nvme0n1) 00:08:56.032 Could not set queue depth (nvme0n2) 00:08:56.032 Could not set queue depth (nvme0n3) 00:08:56.032 Could not set queue depth (nvme0n4) 00:08:56.292 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.292 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.292 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.292 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.292 fio-3.35 00:08:56.292 Starting 4 threads 00:08:57.673 00:08:57.673 job0: (groupid=0, jobs=1): err= 0: pid=2212112: Mon Nov 18 12:51:55 2024 00:08:57.673 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:08:57.674 slat (nsec): min=9838, max=26565, avg=22561.23, stdev=2951.15 00:08:57.674 clat (usec): min=40842, max=41053, avg=40971.09, stdev=56.20 00:08:57.674 lat (usec): min=40866, max=41076, avg=40993.65, stdev=56.63 00:08:57.674 clat percentiles (usec): 00:08:57.674 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:57.674 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:57.674 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:57.674 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:57.674 | 99.99th=[41157] 00:08:57.674 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:08:57.674 slat (nsec): min=9643, max=38183, 
avg=10690.50, stdev=1565.82 00:08:57.674 clat (usec): min=130, max=362, avg=194.93, stdev=35.51 00:08:57.674 lat (usec): min=139, max=401, avg=205.62, stdev=35.79 00:08:57.674 clat percentiles (usec): 00:08:57.674 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 165], 00:08:57.674 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 200], 00:08:57.674 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 249], 95.00th=[ 260], 00:08:57.674 | 99.00th=[ 273], 99.50th=[ 322], 99.90th=[ 363], 99.95th=[ 363], 00:08:57.674 | 99.99th=[ 363] 00:08:57.674 bw ( KiB/s): min= 4096, max= 4096, per=17.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:57.674 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:57.674 lat (usec) : 250=86.52%, 500=9.36% 00:08:57.674 lat (msec) : 50=4.12% 00:08:57.674 cpu : usr=0.00%, sys=0.79%, ctx=536, majf=0, minf=2 00:08:57.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:57.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.674 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.674 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:57.674 job1: (groupid=0, jobs=1): err= 0: pid=2212139: Mon Nov 18 12:51:55 2024 00:08:57.674 read: IOPS=2312, BW=9251KiB/s (9473kB/s)(9260KiB/1001msec) 00:08:57.674 slat (nsec): min=5789, max=26559, avg=7439.48, stdev=1033.40 00:08:57.674 clat (usec): min=162, max=42232, avg=236.67, stdev=1216.66 00:08:57.674 lat (usec): min=169, max=42238, avg=244.11, stdev=1216.75 00:08:57.674 clat percentiles (usec): 00:08:57.674 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:08:57.674 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:08:57.674 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 223], 00:08:57.674 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 433], 99.95th=[41157], 
00:08:57.674 | 99.99th=[42206] 00:08:57.674 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:57.674 slat (usec): min=6, max=623, avg=11.06, stdev=15.89 00:08:57.674 clat (usec): min=119, max=358, avg=154.66, stdev=19.90 00:08:57.674 lat (usec): min=129, max=789, avg=165.73, stdev=26.40 00:08:57.674 clat percentiles (usec): 00:08:57.674 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:08:57.674 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:08:57.674 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 192], 00:08:57.674 | 99.00th=[ 210], 99.50th=[ 245], 99.90th=[ 281], 99.95th=[ 281], 00:08:57.674 | 99.99th=[ 359] 00:08:57.674 bw ( KiB/s): min=10000, max=10000, per=41.67%, avg=10000.00, stdev= 0.00, samples=1 00:08:57.674 iops : min= 2500, max= 2500, avg=2500.00, stdev= 0.00, samples=1 00:08:57.674 lat (usec) : 250=98.97%, 500=0.98% 00:08:57.674 lat (msec) : 50=0.04% 00:08:57.674 cpu : usr=2.00%, sys=4.90%, ctx=4880, majf=0, minf=1 00:08:57.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:57.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.674 issued rwts: total=2315,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.674 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:57.674 job2: (groupid=0, jobs=1): err= 0: pid=2212158: Mon Nov 18 12:51:55 2024 00:08:57.674 read: IOPS=2112, BW=8452KiB/s (8654kB/s)(8460KiB/1001msec) 00:08:57.674 slat (nsec): min=7141, max=26765, avg=8242.48, stdev=1084.95 00:08:57.674 clat (usec): min=172, max=478, avg=231.01, stdev=27.82 00:08:57.674 lat (usec): min=181, max=487, avg=239.25, stdev=27.87 00:08:57.674 clat percentiles (usec): 00:08:57.674 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:08:57.674 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 
00:08:57.674 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 285], 00:08:57.674 | 99.00th=[ 318], 99.50th=[ 371], 99.90th=[ 441], 99.95th=[ 445], 00:08:57.674 | 99.99th=[ 478] 00:08:57.674 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:57.674 slat (usec): min=10, max=18521, avg=19.23, stdev=365.83 00:08:57.674 clat (usec): min=130, max=303, avg=168.41, stdev=22.22 00:08:57.674 lat (usec): min=142, max=18820, avg=187.63, stdev=369.08 00:08:57.674 clat percentiles (usec): 00:08:57.674 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:08:57.674 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:08:57.674 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 194], 95.00th=[ 210], 00:08:57.674 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 302], 00:08:57.674 | 99.99th=[ 306] 00:08:57.674 bw ( KiB/s): min= 9968, max= 9968, per=41.53%, avg=9968.00, stdev= 0.00, samples=1 00:08:57.674 iops : min= 2492, max= 2492, avg=2492.00, stdev= 0.00, samples=1 00:08:57.674 lat (usec) : 250=91.27%, 500=8.73% 00:08:57.674 cpu : usr=4.20%, sys=7.20%, ctx=4678, majf=0, minf=2 00:08:57.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:57.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.674 issued rwts: total=2115,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.674 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:57.674 job3: (groupid=0, jobs=1): err= 0: pid=2212159: Mon Nov 18 12:51:55 2024 00:08:57.674 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:08:57.674 slat (nsec): min=10789, max=22040, avg=12513.45, stdev=2956.66 00:08:57.674 clat (usec): min=236, max=41269, avg=39133.79, stdev=8688.25 00:08:57.674 lat (usec): min=248, max=41280, avg=39146.31, stdev=8688.50 00:08:57.674 clat percentiles (usec): 00:08:57.674 | 1.00th=[ 
237], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:57.674 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:57.674 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:57.674 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:57.674 | 99.99th=[41157] 00:08:57.674 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:08:57.674 slat (usec): min=9, max=40714, avg=125.21, stdev=1961.19 00:08:57.674 clat (usec): min=123, max=362, avg=188.90, stdev=46.92 00:08:57.674 lat (usec): min=134, max=41076, avg=314.11, stdev=1970.97 00:08:57.674 clat percentiles (usec): 00:08:57.674 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 151], 00:08:57.674 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 202], 00:08:57.674 | 70.00th=[ 210], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 265], 00:08:57.674 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:08:57.674 | 99.99th=[ 363] 00:08:57.674 bw ( KiB/s): min= 4096, max= 4096, per=17.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:57.674 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:57.674 lat (usec) : 250=86.33%, 500=9.74% 00:08:57.674 lat (msec) : 50=3.93% 00:08:57.674 cpu : usr=0.29%, sys=0.49%, ctx=537, majf=0, minf=1 00:08:57.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:57.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.674 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.674 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:57.674 00:08:57.674 Run status group 0 (all jobs): 00:08:57.674 READ: bw=17.1MiB/s (17.9MB/s), 85.9KiB/s-9251KiB/s (88.0kB/s-9473kB/s), io=17.5MiB (18.3MB), run=1001-1024msec 00:08:57.674 WRITE: bw=23.4MiB/s (24.6MB/s), 2000KiB/s-9.99MiB/s 
(2048kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1024msec 00:08:57.674 00:08:57.674 Disk stats (read/write): 00:08:57.674 nvme0n1: ios=39/512, merge=0/0, ticks=1560/103, in_queue=1663, util=84.77% 00:08:57.675 nvme0n2: ios=1921/2048, merge=0/0, ticks=626/301, in_queue=927, util=91.04% 00:08:57.675 nvme0n3: ios=1692/2048, merge=0/0, ticks=1268/324, in_queue=1592, util=92.71% 00:08:57.675 nvme0n4: ios=39/512, merge=0/0, ticks=1520/96, in_queue=1616, util=99.67% 00:08:57.675 12:51:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:57.675 [global] 00:08:57.675 thread=1 00:08:57.675 invalidate=1 00:08:57.675 rw=randwrite 00:08:57.675 time_based=1 00:08:57.675 runtime=1 00:08:57.675 ioengine=libaio 00:08:57.675 direct=1 00:08:57.675 bs=4096 00:08:57.675 iodepth=1 00:08:57.675 norandommap=0 00:08:57.675 numjobs=1 00:08:57.675 00:08:57.675 verify_dump=1 00:08:57.675 verify_backlog=512 00:08:57.675 verify_state_save=0 00:08:57.675 do_verify=1 00:08:57.675 verify=crc32c-intel 00:08:57.675 [job0] 00:08:57.675 filename=/dev/nvme0n1 00:08:57.675 [job1] 00:08:57.675 filename=/dev/nvme0n2 00:08:57.675 [job2] 00:08:57.675 filename=/dev/nvme0n3 00:08:57.675 [job3] 00:08:57.675 filename=/dev/nvme0n4 00:08:57.675 Could not set queue depth (nvme0n1) 00:08:57.675 Could not set queue depth (nvme0n2) 00:08:57.675 Could not set queue depth (nvme0n3) 00:08:57.675 Could not set queue depth (nvme0n4) 00:08:57.934 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.935 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.935 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.935 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:08:57.935 fio-3.35 00:08:57.935 Starting 4 threads 00:08:59.315 00:08:59.315 job0: (groupid=0, jobs=1): err= 0: pid=2212530: Mon Nov 18 12:51:56 2024 00:08:59.315 read: IOPS=941, BW=3764KiB/s (3854kB/s)(3892KiB/1034msec) 00:08:59.315 slat (nsec): min=7030, max=26767, avg=8265.73, stdev=1980.30 00:08:59.315 clat (usec): min=174, max=42955, avg=845.97, stdev=5039.39 00:08:59.315 lat (usec): min=182, max=42978, avg=854.24, stdev=5040.87 00:08:59.315 clat percentiles (usec): 00:08:59.315 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:08:59.315 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:08:59.315 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 253], 00:08:59.315 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:08:59.316 | 99.99th=[42730] 00:08:59.316 write: IOPS=990, BW=3961KiB/s (4056kB/s)(4096KiB/1034msec); 0 zone resets 00:08:59.316 slat (nsec): min=10322, max=38016, avg=11449.19, stdev=1565.39 00:08:59.316 clat (usec): min=124, max=446, avg=178.10, stdev=41.42 00:08:59.316 lat (usec): min=134, max=484, avg=189.55, stdev=41.69 00:08:59.316 clat percentiles (usec): 00:08:59.316 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 149], 00:08:59.316 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:08:59.316 | 70.00th=[ 176], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 260], 00:08:59.316 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 445], 00:08:59.316 | 99.99th=[ 445] 00:08:59.316 bw ( KiB/s): min= 8192, max= 8192, per=45.96%, avg=8192.00, stdev= 0.00, samples=1 00:08:59.316 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:59.316 lat (usec) : 250=92.39%, 500=6.86% 00:08:59.316 lat (msec) : 50=0.75% 00:08:59.316 cpu : usr=1.55%, sys=3.10%, ctx=2000, majf=0, minf=1 00:08:59.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.316 issued rwts: total=973,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.316 job1: (groupid=0, jobs=1): err= 0: pid=2212531: Mon Nov 18 12:51:56 2024 00:08:59.316 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:08:59.316 slat (nsec): min=9942, max=23692, avg=22436.59, stdev=2826.90 00:08:59.316 clat (usec): min=40652, max=41106, avg=40953.88, stdev=91.43 00:08:59.316 lat (usec): min=40662, max=41129, avg=40976.31, stdev=93.34 00:08:59.316 clat percentiles (usec): 00:08:59.316 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:59.316 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:59.316 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:59.316 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:59.316 | 99.99th=[41157] 00:08:59.316 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:08:59.316 slat (usec): min=11, max=8487, avg=29.12, stdev=374.57 00:08:59.316 clat (usec): min=132, max=288, avg=193.68, stdev=29.20 00:08:59.316 lat (usec): min=146, max=8725, avg=222.80, stdev=377.61 00:08:59.316 clat percentiles (usec): 00:08:59.316 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:08:59.316 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 202], 00:08:59.316 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 247], 00:08:59.316 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 289], 00:08:59.316 | 99.99th=[ 289] 00:08:59.316 bw ( KiB/s): min= 4096, max= 4096, per=22.98%, avg=4096.00, stdev= 0.00, samples=1 00:08:59.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:59.316 lat (usec) : 250=92.32%, 500=3.56% 00:08:59.316 lat (msec) : 50=4.12% 00:08:59.316 cpu : 
usr=0.10%, sys=1.37%, ctx=536, majf=0, minf=1 00:08:59.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.316 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.316 job2: (groupid=0, jobs=1): err= 0: pid=2212532: Mon Nov 18 12:51:56 2024 00:08:59.316 read: IOPS=1055, BW=4222KiB/s (4323kB/s)(4264KiB/1010msec) 00:08:59.316 slat (nsec): min=6712, max=23990, avg=7816.97, stdev=1668.79 00:08:59.316 clat (usec): min=173, max=41433, avg=651.24, stdev=4115.71 00:08:59.316 lat (usec): min=181, max=41441, avg=659.06, stdev=4116.22 00:08:59.316 clat percentiles (usec): 00:08:59.316 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 210], 00:08:59.316 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 00:08:59.316 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 289], 00:08:59.316 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:08:59.316 | 99.99th=[41681] 00:08:59.316 write: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec); 0 zone resets 00:08:59.316 slat (nsec): min=9534, max=42280, avg=11334.50, stdev=1784.46 00:08:59.316 clat (usec): min=118, max=354, avg=183.59, stdev=48.18 00:08:59.316 lat (usec): min=128, max=368, avg=194.92, stdev=49.08 00:08:59.316 clat percentiles (usec): 00:08:59.316 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 143], 00:08:59.316 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 176], 00:08:59.316 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 260], 00:08:59.316 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 347], 99.95th=[ 355], 00:08:59.316 | 99.99th=[ 355] 00:08:59.316 bw ( KiB/s): min= 2336, max= 9952, per=34.47%, avg=6144.00, stdev=5385.33, samples=2 
00:08:59.316 iops : min= 584, max= 2488, avg=1536.00, stdev=1346.33, samples=2 00:08:59.316 lat (usec) : 250=87.05%, 500=12.53% 00:08:59.316 lat (msec) : 50=0.42% 00:08:59.316 cpu : usr=1.09%, sys=2.78%, ctx=2604, majf=0, minf=1 00:08:59.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.316 issued rwts: total=1066,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.316 job3: (groupid=0, jobs=1): err= 0: pid=2212533: Mon Nov 18 12:51:56 2024 00:08:59.316 read: IOPS=1488, BW=5954KiB/s (6097kB/s)(5960KiB/1001msec) 00:08:59.316 slat (nsec): min=6366, max=27788, avg=7552.45, stdev=1517.82 00:08:59.316 clat (usec): min=171, max=41441, avg=499.61, stdev=3231.77 00:08:59.316 lat (usec): min=178, max=41451, avg=507.16, stdev=3232.91 00:08:59.316 clat percentiles (usec): 00:08:59.316 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 217], 00:08:59.316 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:08:59.316 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:08:59.316 | 99.00th=[ 277], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:08:59.316 | 99.99th=[41681] 00:08:59.316 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:59.316 slat (nsec): min=9125, max=40467, avg=10232.91, stdev=1505.85 00:08:59.316 clat (usec): min=110, max=651, avg=145.04, stdev=27.11 00:08:59.316 lat (usec): min=120, max=665, avg=155.28, stdev=27.53 00:08:59.316 clat percentiles (usec): 00:08:59.316 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 128], 00:08:59.316 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 145], 00:08:59.316 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 180], 00:08:59.316 | 99.00th=[ 198], 
99.50th=[ 227], 99.90th=[ 490], 99.95th=[ 652], 00:08:59.316 | 99.99th=[ 652] 00:08:59.316 bw ( KiB/s): min= 4096, max= 4096, per=22.98%, avg=4096.00, stdev= 0.00, samples=1 00:08:59.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:59.316 lat (usec) : 250=84.93%, 500=14.71%, 750=0.03% 00:08:59.316 lat (msec) : 50=0.33% 00:08:59.316 cpu : usr=1.00%, sys=3.20%, ctx=3026, majf=0, minf=2 00:08:59.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.316 issued rwts: total=1490,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.316 00:08:59.316 Run status group 0 (all jobs): 00:08:59.316 READ: bw=13.4MiB/s (14.1MB/s), 86.3KiB/s-5954KiB/s (88.3kB/s-6097kB/s), io=13.9MiB (14.5MB), run=1001-1034msec 00:08:59.316 WRITE: bw=17.4MiB/s (18.3MB/s), 2008KiB/s-6138KiB/s (2056kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1034msec 00:08:59.316 00:08:59.316 Disk stats (read/write): 00:08:59.316 nvme0n1: ios=991/1024, merge=0/0, ticks=1508/171, in_queue=1679, util=89.68% 00:08:59.316 nvme0n2: ios=50/512, merge=0/0, ticks=1197/91, in_queue=1288, util=100.00% 00:08:59.316 nvme0n3: ios=1096/1536, merge=0/0, ticks=1500/276, in_queue=1776, util=93.86% 00:08:59.316 nvme0n4: ios=1081/1360, merge=0/0, ticks=698/196, in_queue=894, util=95.07% 00:08:59.317 12:51:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:59.317 [global] 00:08:59.317 thread=1 00:08:59.317 invalidate=1 00:08:59.317 rw=write 00:08:59.317 time_based=1 00:08:59.317 runtime=1 00:08:59.317 ioengine=libaio 00:08:59.317 direct=1 00:08:59.317 bs=4096 00:08:59.317 iodepth=128 00:08:59.317 norandommap=0 
00:08:59.317 numjobs=1 00:08:59.317 00:08:59.317 verify_dump=1 00:08:59.317 verify_backlog=512 00:08:59.317 verify_state_save=0 00:08:59.317 do_verify=1 00:08:59.317 verify=crc32c-intel 00:08:59.317 [job0] 00:08:59.317 filename=/dev/nvme0n1 00:08:59.317 [job1] 00:08:59.317 filename=/dev/nvme0n2 00:08:59.317 [job2] 00:08:59.317 filename=/dev/nvme0n3 00:08:59.317 [job3] 00:08:59.317 filename=/dev/nvme0n4 00:08:59.317 Could not set queue depth (nvme0n1) 00:08:59.317 Could not set queue depth (nvme0n2) 00:08:59.317 Could not set queue depth (nvme0n3) 00:08:59.317 Could not set queue depth (nvme0n4) 00:08:59.577 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.577 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.577 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.577 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:59.577 fio-3.35 00:08:59.577 Starting 4 threads 00:09:00.958 00:09:00.958 job0: (groupid=0, jobs=1): err= 0: pid=2212905: Mon Nov 18 12:51:58 2024 00:09:00.958 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:09:00.958 slat (nsec): min=1481, max=12168k, avg=89626.49, stdev=545219.29 00:09:00.958 clat (usec): min=5910, max=48835, avg=11272.84, stdev=4720.95 00:09:00.958 lat (usec): min=5914, max=48863, avg=11362.46, stdev=4765.09 00:09:00.958 clat percentiles (usec): 00:09:00.958 | 1.00th=[ 7111], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[ 9634], 00:09:00.958 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:09:00.958 | 70.00th=[11207], 80.00th=[11863], 90.00th=[13042], 95.00th=[13304], 00:09:00.958 | 99.00th=[41157], 99.50th=[45876], 99.90th=[49021], 99.95th=[49021], 00:09:00.958 | 99.99th=[49021] 00:09:00.958 write: IOPS=4437, BW=17.3MiB/s 
(18.2MB/s)(17.5MiB/1011msec); 0 zone resets 00:09:00.958 slat (usec): min=2, max=41162, avg=134.89, stdev=939.09 00:09:00.958 clat (usec): min=3347, max=94480, avg=16894.87, stdev=18000.05 00:09:00.958 lat (usec): min=3356, max=94494, avg=17029.76, stdev=18126.83 00:09:00.958 clat percentiles (usec): 00:09:00.958 | 1.00th=[ 5080], 5.00th=[ 6849], 10.00th=[ 8848], 20.00th=[ 9503], 00:09:00.958 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10290], 00:09:00.958 | 70.00th=[10552], 80.00th=[12911], 90.00th=[44303], 95.00th=[61604], 00:09:00.958 | 99.00th=[89654], 99.50th=[89654], 99.90th=[94897], 99.95th=[94897], 00:09:00.958 | 99.99th=[94897] 00:09:00.958 bw ( KiB/s): min=10288, max=24576, per=26.27%, avg=17432.00, stdev=10103.14, samples=2 00:09:00.958 iops : min= 2572, max= 6144, avg=4358.00, stdev=2525.79, samples=2 00:09:00.958 lat (msec) : 4=0.05%, 10=41.58%, 20=48.31%, 50=5.99%, 100=4.08% 00:09:00.958 cpu : usr=3.96%, sys=5.15%, ctx=386, majf=0, minf=1 00:09:00.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:00.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.958 issued rwts: total=4096,4486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.958 job1: (groupid=0, jobs=1): err= 0: pid=2212906: Mon Nov 18 12:51:58 2024 00:09:00.958 read: IOPS=2287, BW=9149KiB/s (9369kB/s)(9204KiB/1006msec) 00:09:00.958 slat (nsec): min=1134, max=21099k, avg=208940.83, stdev=1316521.82 00:09:00.958 clat (usec): min=3283, max=88272, avg=24331.90, stdev=11641.02 00:09:00.958 lat (usec): min=8572, max=89913, avg=24540.84, stdev=11797.31 00:09:00.958 clat percentiles (usec): 00:09:00.958 | 1.00th=[ 8586], 5.00th=[12256], 10.00th=[13566], 20.00th=[16581], 00:09:00.958 | 30.00th=[17957], 40.00th=[18744], 50.00th=[20579], 60.00th=[24773], 00:09:00.958 | 
70.00th=[26084], 80.00th=[31065], 90.00th=[40633], 95.00th=[47449], 00:09:00.958 | 99.00th=[68682], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:09:00.958 | 99.99th=[88605] 00:09:00.958 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:09:00.958 slat (nsec): min=1914, max=11092k, avg=198183.57, stdev=957932.64 00:09:00.958 clat (msec): min=7, max=112, avg=27.36, stdev=24.54 00:09:00.958 lat (msec): min=7, max=112, avg=27.56, stdev=24.66 00:09:00.958 clat percentiles (msec): 00:09:00.958 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:09:00.958 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 20], 60.00th=[ 22], 00:09:00.958 | 70.00th=[ 23], 80.00th=[ 39], 90.00th=[ 63], 95.00th=[ 89], 00:09:00.958 | 99.00th=[ 106], 99.50th=[ 112], 99.90th=[ 113], 99.95th=[ 113], 00:09:00.958 | 99.99th=[ 113] 00:09:00.958 bw ( KiB/s): min= 9616, max=10864, per=15.43%, avg=10240.00, stdev=882.47, samples=2 00:09:00.958 iops : min= 2404, max= 2716, avg=2560.00, stdev=220.62, samples=2 00:09:00.959 lat (msec) : 4=0.02%, 10=2.26%, 20=48.39%, 50=39.07%, 100=9.05% 00:09:00.959 lat (msec) : 250=1.21% 00:09:00.959 cpu : usr=1.89%, sys=2.79%, ctx=273, majf=0, minf=1 00:09:00.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:00.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.959 issued rwts: total=2301,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.959 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.959 job2: (groupid=0, jobs=1): err= 0: pid=2212907: Mon Nov 18 12:51:58 2024 00:09:00.959 read: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1006msec) 00:09:00.959 slat (nsec): min=1374, max=12236k, avg=135295.58, stdev=906505.47 00:09:00.959 clat (usec): min=4395, max=58539, avg=15214.15, stdev=6845.36 00:09:00.959 lat (usec): min=6493, max=58548, avg=15349.44, stdev=6949.44 
00:09:00.959 clat percentiles (usec): 00:09:00.959 | 1.00th=[ 8094], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11469], 00:09:00.959 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:09:00.959 | 70.00th=[14877], 80.00th=[17433], 90.00th=[22676], 95.00th=[29230], 00:09:00.959 | 99.00th=[46924], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:09:00.959 | 99.99th=[58459] 00:09:00.959 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:09:00.959 slat (usec): min=2, max=9670, avg=142.38, stdev=697.67 00:09:00.959 clat (usec): min=1511, max=79449, avg=20959.70, stdev=15932.06 00:09:00.959 lat (usec): min=1525, max=79463, avg=21102.09, stdev=16032.44 00:09:00.959 clat percentiles (usec): 00:09:00.959 | 1.00th=[ 2089], 5.00th=[ 6063], 10.00th=[ 7635], 20.00th=[10552], 00:09:00.959 | 30.00th=[11338], 40.00th=[11994], 50.00th=[15664], 60.00th=[21890], 00:09:00.959 | 70.00th=[22938], 80.00th=[25297], 90.00th=[44827], 95.00th=[62653], 00:09:00.959 | 99.00th=[77071], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:09:00.959 | 99.99th=[79168] 00:09:00.959 bw ( KiB/s): min=12176, max=16496, per=21.60%, avg=14336.00, stdev=3054.70, samples=2 00:09:00.959 iops : min= 3044, max= 4124, avg=3584.00, stdev=763.68, samples=2 00:09:00.959 lat (msec) : 2=0.50%, 4=1.09%, 10=10.15%, 20=58.49%, 50=25.53% 00:09:00.959 lat (msec) : 100=4.24% 00:09:00.959 cpu : usr=3.48%, sys=3.98%, ctx=382, majf=0, minf=2 00:09:00.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:00.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.959 issued rwts: total=3450,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.959 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.959 job3: (groupid=0, jobs=1): err= 0: pid=2212908: Mon Nov 18 12:51:58 2024 00:09:00.959 read: IOPS=6084, BW=23.8MiB/s 
(24.9MB/s)(24.0MiB/1008msec) 00:09:00.959 slat (nsec): min=1357, max=10296k, avg=89833.36, stdev=655187.23 00:09:00.959 clat (usec): min=3238, max=22031, avg=11165.64, stdev=2892.25 00:09:00.959 lat (usec): min=3245, max=22059, avg=11255.47, stdev=2939.93 00:09:00.959 clat percentiles (usec): 00:09:00.959 | 1.00th=[ 4359], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8979], 00:09:00.959 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10945], 60.00th=[11338], 00:09:00.959 | 70.00th=[11863], 80.00th=[13042], 90.00th=[15139], 95.00th=[16909], 00:09:00.959 | 99.00th=[20055], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:09:00.959 | 99.99th=[22152] 00:09:00.959 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:09:00.959 slat (usec): min=2, max=9210, avg=68.06, stdev=389.79 00:09:00.959 clat (usec): min=1498, max=21297, avg=9528.98, stdev=2190.76 00:09:00.959 lat (usec): min=1513, max=21322, avg=9597.04, stdev=2230.02 00:09:00.959 clat percentiles (usec): 00:09:00.959 | 1.00th=[ 2704], 5.00th=[ 4752], 10.00th=[ 6783], 20.00th=[ 8455], 00:09:00.959 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[10159], 00:09:00.959 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:09:00.959 | 99.00th=[12649], 99.50th=[16319], 99.90th=[20841], 99.95th=[21103], 00:09:00.959 | 99.99th=[21365] 00:09:00.959 bw ( KiB/s): min=23088, max=26064, per=37.03%, avg=24576.00, stdev=2104.35, samples=2 00:09:00.959 iops : min= 5772, max= 6516, avg=6144.00, stdev=526.09, samples=2 00:09:00.959 lat (msec) : 2=0.06%, 4=1.92%, 10=47.62%, 20=49.80%, 50=0.60% 00:09:00.959 cpu : usr=4.47%, sys=6.45%, ctx=691, majf=0, minf=1 00:09:00.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:00.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.959 issued rwts: total=6133,6144,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:00.959 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.959 00:09:00.959 Run status group 0 (all jobs): 00:09:00.959 READ: bw=61.7MiB/s (64.7MB/s), 9149KiB/s-23.8MiB/s (9369kB/s-24.9MB/s), io=62.4MiB (65.5MB), run=1006-1011msec 00:09:00.959 WRITE: bw=64.8MiB/s (68.0MB/s), 9.94MiB/s-23.8MiB/s (10.4MB/s-25.0MB/s), io=65.5MiB (68.7MB), run=1006-1011msec 00:09:00.959 00:09:00.959 Disk stats (read/write): 00:09:00.959 nvme0n1: ios=3606/4095, merge=0/0, ticks=21599/41219, in_queue=62818, util=91.38% 00:09:00.959 nvme0n2: ios=1586/2048, merge=0/0, ticks=14578/20895, in_queue=35473, util=95.51% 00:09:00.959 nvme0n3: ios=3116/3295, merge=0/0, ticks=43581/57481, in_queue=101062, util=99.58% 00:09:00.959 nvme0n4: ios=4747/5120, merge=0/0, ticks=53730/48915, in_queue=102645, util=99.79% 00:09:00.959 12:51:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:00.959 [global] 00:09:00.959 thread=1 00:09:00.959 invalidate=1 00:09:00.959 rw=randwrite 00:09:00.959 time_based=1 00:09:00.959 runtime=1 00:09:00.959 ioengine=libaio 00:09:00.959 direct=1 00:09:00.959 bs=4096 00:09:00.959 iodepth=128 00:09:00.959 norandommap=0 00:09:00.959 numjobs=1 00:09:00.959 00:09:00.959 verify_dump=1 00:09:00.959 verify_backlog=512 00:09:00.959 verify_state_save=0 00:09:00.959 do_verify=1 00:09:00.959 verify=crc32c-intel 00:09:00.959 [job0] 00:09:00.959 filename=/dev/nvme0n1 00:09:00.959 [job1] 00:09:00.959 filename=/dev/nvme0n2 00:09:00.959 [job2] 00:09:00.959 filename=/dev/nvme0n3 00:09:00.959 [job3] 00:09:00.959 filename=/dev/nvme0n4 00:09:00.959 Could not set queue depth (nvme0n1) 00:09:00.959 Could not set queue depth (nvme0n2) 00:09:00.959 Could not set queue depth (nvme0n3) 00:09:00.959 Could not set queue depth (nvme0n4) 00:09:01.218 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:01.218 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.218 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.218 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.218 fio-3.35 00:09:01.218 Starting 4 threads 00:09:02.596 00:09:02.596 job0: (groupid=0, jobs=1): err= 0: pid=2213280: Mon Nov 18 12:51:59 2024 00:09:02.596 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:09:02.596 slat (nsec): min=1439, max=24779k, avg=153177.46, stdev=1147701.81 00:09:02.596 clat (usec): min=9028, max=59810, avg=18704.18, stdev=10027.65 00:09:02.596 lat (usec): min=9037, max=64623, avg=18857.36, stdev=10135.07 00:09:02.596 clat percentiles (usec): 00:09:02.596 | 1.00th=[10290], 5.00th=[12649], 10.00th=[12780], 20.00th=[13435], 00:09:02.596 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:09:02.596 | 70.00th=[15664], 80.00th=[21890], 90.00th=[37487], 95.00th=[43254], 00:09:02.596 | 99.00th=[54789], 99.50th=[56886], 99.90th=[60031], 99.95th=[60031], 00:09:02.596 | 99.99th=[60031] 00:09:02.596 write: IOPS=2885, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1009msec); 0 zone resets 00:09:02.596 slat (usec): min=2, max=10970, avg=204.57, stdev=934.99 00:09:02.596 clat (usec): min=1621, max=114965, avg=27520.39, stdev=19684.96 00:09:02.596 lat (usec): min=1630, max=114975, avg=27724.96, stdev=19786.46 00:09:02.596 clat percentiles (msec): 00:09:02.596 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 17], 00:09:02.596 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 22], 60.00th=[ 22], 00:09:02.596 | 70.00th=[ 22], 80.00th=[ 35], 90.00th=[ 53], 95.00th=[ 65], 00:09:02.596 | 99.00th=[ 109], 99.50th=[ 113], 99.90th=[ 115], 99.95th=[ 115], 00:09:02.596 | 99.99th=[ 115] 00:09:02.596 bw ( KiB/s): min= 9976, max=12288, per=15.37%, 
avg=11132.00, stdev=1634.83, samples=2 00:09:02.596 iops : min= 2494, max= 3072, avg=2783.00, stdev=408.71, samples=2 00:09:02.596 lat (msec) : 2=0.15%, 10=2.43%, 20=47.16%, 50=43.41%, 100=5.43% 00:09:02.596 lat (msec) : 250=1.43% 00:09:02.596 cpu : usr=1.79%, sys=3.77%, ctx=352, majf=0, minf=1 00:09:02.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:02.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.596 issued rwts: total=2560,2911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.596 job1: (groupid=0, jobs=1): err= 0: pid=2213281: Mon Nov 18 12:51:59 2024 00:09:02.596 read: IOPS=5852, BW=22.9MiB/s (24.0MB/s)(22.9MiB/1002msec) 00:09:02.596 slat (nsec): min=1420, max=3706.6k, avg=81520.41, stdev=416861.26 00:09:02.596 clat (usec): min=462, max=15229, avg=10600.68, stdev=1316.44 00:09:02.596 lat (usec): min=2173, max=15243, avg=10682.20, stdev=1334.05 00:09:02.596 clat percentiles (usec): 00:09:02.596 | 1.00th=[ 5735], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:09:02.596 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:09:02.596 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12125], 95.00th=[12387], 00:09:02.596 | 99.00th=[13435], 99.50th=[13566], 99.90th=[14091], 99.95th=[14484], 00:09:02.596 | 99.99th=[15270] 00:09:02.596 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:09:02.596 slat (usec): min=2, max=3765, avg=79.95, stdev=406.08 00:09:02.596 clat (usec): min=7165, max=15934, avg=10499.54, stdev=866.08 00:09:02.596 lat (usec): min=7192, max=15969, avg=10579.49, stdev=912.48 00:09:02.596 clat percentiles (usec): 00:09:02.596 | 1.00th=[ 7701], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10159], 00:09:02.596 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 
00:09:02.596 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11731], 95.00th=[12387], 00:09:02.596 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14615], 99.95th=[15795], 00:09:02.596 | 99.99th=[15926] 00:09:02.596 bw ( KiB/s): min=24576, max=24576, per=33.93%, avg=24576.00, stdev= 0.00, samples=2 00:09:02.597 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:09:02.597 lat (usec) : 500=0.01% 00:09:02.597 lat (msec) : 4=0.18%, 10=18.69%, 20=81.12% 00:09:02.597 cpu : usr=5.69%, sys=5.29%, ctx=620, majf=0, minf=1 00:09:02.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:02.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.597 issued rwts: total=5864,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.597 job2: (groupid=0, jobs=1): err= 0: pid=2213282: Mon Nov 18 12:51:59 2024 00:09:02.597 read: IOPS=3289, BW=12.8MiB/s (13.5MB/s)(13.0MiB/1008msec) 00:09:02.597 slat (nsec): min=1494, max=14495k, avg=143923.49, stdev=925376.44 00:09:02.597 clat (usec): min=4558, max=50234, avg=15991.59, stdev=7180.32 00:09:02.597 lat (usec): min=6105, max=50243, avg=16135.52, stdev=7250.29 00:09:02.597 clat percentiles (usec): 00:09:02.597 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[11731], 20.00th=[12387], 00:09:02.597 | 30.00th=[12518], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:09:02.597 | 70.00th=[15139], 80.00th=[17957], 90.00th=[27132], 95.00th=[32375], 00:09:02.597 | 99.00th=[43779], 99.50th=[45351], 99.90th=[47973], 99.95th=[50070], 00:09:02.597 | 99.99th=[50070] 00:09:02.597 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:09:02.597 slat (usec): min=2, max=9302, avg=140.91, stdev=583.55 00:09:02.597 clat (usec): min=1535, max=50238, avg=20854.11, stdev=9656.54 00:09:02.597 lat (usec): min=1549, max=50245, 
avg=20995.02, stdev=9719.63 00:09:02.597 clat percentiles (usec): 00:09:02.597 | 1.00th=[ 3458], 5.00th=[ 7635], 10.00th=[ 9896], 20.00th=[11076], 00:09:02.597 | 30.00th=[13435], 40.00th=[20317], 50.00th=[21365], 60.00th=[21627], 00:09:02.597 | 70.00th=[21890], 80.00th=[27919], 90.00th=[35390], 95.00th=[40109], 00:09:02.597 | 99.00th=[45876], 99.50th=[46400], 99.90th=[50070], 99.95th=[50070], 00:09:02.597 | 99.99th=[50070] 00:09:02.597 bw ( KiB/s): min=12304, max=16368, per=19.79%, avg=14336.00, stdev=2873.68, samples=2 00:09:02.597 iops : min= 3076, max= 4092, avg=3584.00, stdev=718.42, samples=2 00:09:02.597 lat (msec) : 2=0.16%, 4=0.52%, 10=7.78%, 20=50.22%, 50=41.22% 00:09:02.597 lat (msec) : 100=0.10% 00:09:02.597 cpu : usr=3.28%, sys=3.38%, ctx=440, majf=0, minf=1 00:09:02.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:02.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.597 issued rwts: total=3316,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.597 job3: (groupid=0, jobs=1): err= 0: pid=2213283: Mon Nov 18 12:51:59 2024 00:09:02.597 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(21.9MiB/1007msec) 00:09:02.597 slat (nsec): min=1292, max=10357k, avg=99158.71, stdev=704479.65 00:09:02.597 clat (usec): min=3696, max=21417, avg=12234.44, stdev=3115.38 00:09:02.597 lat (usec): min=4567, max=22899, avg=12333.60, stdev=3161.80 00:09:02.597 clat percentiles (usec): 00:09:02.597 | 1.00th=[ 5473], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10552], 00:09:02.597 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:09:02.597 | 70.00th=[11731], 80.00th=[14746], 90.00th=[17433], 95.00th=[19006], 00:09:02.597 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:09:02.597 | 99.99th=[21365] 00:09:02.597 write: 
IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:09:02.597 slat (usec): min=2, max=8712, avg=73.61, stdev=291.21 00:09:02.597 clat (usec): min=1484, max=21316, avg=10468.16, stdev=2200.27 00:09:02.597 lat (usec): min=1497, max=21319, avg=10541.77, stdev=2220.94 00:09:02.597 clat percentiles (usec): 00:09:02.597 | 1.00th=[ 3851], 5.00th=[ 5604], 10.00th=[ 6915], 20.00th=[ 9241], 00:09:02.597 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11338], 00:09:02.597 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[12780], 00:09:02.597 | 99.00th=[14353], 99.50th=[14353], 99.90th=[20841], 99.95th=[21103], 00:09:02.597 | 99.99th=[21365] 00:09:02.597 bw ( KiB/s): min=21136, max=23920, per=31.10%, avg=22528.00, stdev=1968.59, samples=2 00:09:02.597 iops : min= 5284, max= 5980, avg=5632.00, stdev=492.15, samples=2 00:09:02.597 lat (msec) : 2=0.12%, 4=0.45%, 10=19.62%, 20=78.62%, 50=1.19% 00:09:02.597 cpu : usr=4.47%, sys=5.77%, ctx=724, majf=0, minf=2 00:09:02.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:02.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.597 issued rwts: total=5616,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.597 00:09:02.597 Run status group 0 (all jobs): 00:09:02.597 READ: bw=67.2MiB/s (70.5MB/s), 9.91MiB/s-22.9MiB/s (10.4MB/s-24.0MB/s), io=67.8MiB (71.1MB), run=1002-1009msec 00:09:02.597 WRITE: bw=70.7MiB/s (74.2MB/s), 11.3MiB/s-24.0MiB/s (11.8MB/s-25.1MB/s), io=71.4MiB (74.8MB), run=1002-1009msec 00:09:02.597 00:09:02.597 Disk stats (read/write): 00:09:02.597 nvme0n1: ios=2071/2447, merge=0/0, ticks=20202/34946, in_queue=55148, util=98.30% 00:09:02.597 nvme0n2: ios=5073/5120, merge=0/0, ticks=18665/16459, in_queue=35124, util=98.48% 00:09:02.597 nvme0n3: ios=2971/3072, 
merge=0/0, ticks=46021/60298, in_queue=106319, util=99.06% 00:09:02.597 nvme0n4: ios=4630/4935, merge=0/0, ticks=55110/50673, in_queue=105783, util=98.32% 00:09:02.597 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:02.597 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2213517 00:09:02.597 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:02.597 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:02.597 [global] 00:09:02.597 thread=1 00:09:02.597 invalidate=1 00:09:02.597 rw=read 00:09:02.597 time_based=1 00:09:02.597 runtime=10 00:09:02.597 ioengine=libaio 00:09:02.597 direct=1 00:09:02.597 bs=4096 00:09:02.597 iodepth=1 00:09:02.597 norandommap=1 00:09:02.597 numjobs=1 00:09:02.597 00:09:02.597 [job0] 00:09:02.597 filename=/dev/nvme0n1 00:09:02.597 [job1] 00:09:02.597 filename=/dev/nvme0n2 00:09:02.597 [job2] 00:09:02.597 filename=/dev/nvme0n3 00:09:02.597 [job3] 00:09:02.597 filename=/dev/nvme0n4 00:09:02.597 Could not set queue depth (nvme0n1) 00:09:02.597 Could not set queue depth (nvme0n2) 00:09:02.597 Could not set queue depth (nvme0n3) 00:09:02.597 Could not set queue depth (nvme0n4) 00:09:02.597 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.597 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.597 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.597 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.597 fio-3.35 00:09:02.597 Starting 4 threads 00:09:05.887 12:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:05.887 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=25501696, buflen=4096 00:09:05.887 fio: pid=2213658, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:05.887 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:05.887 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=54124544, buflen=4096 00:09:05.887 fio: pid=2213657, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:05.887 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:05.887 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:05.887 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:05.887 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:05.887 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=520192, buflen=4096 00:09:05.887 fio: pid=2213655, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.146 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=19447808, buflen=4096 00:09:06.146 fio: pid=2213656, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.146 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.146 12:52:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:06.146 00:09:06.146 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2213655: Mon Nov 18 12:52:03 2024 00:09:06.146 read: IOPS=40, BW=161KiB/s (165kB/s)(508KiB/3148msec) 00:09:06.146 slat (usec): min=7, max=12752, avg=117.18, stdev=1125.60 00:09:06.146 clat (usec): min=225, max=42018, avg=24493.42, stdev=20005.08 00:09:06.146 lat (usec): min=237, max=42042, avg=24511.11, stdev=20005.13 00:09:06.146 clat percentiles (usec): 00:09:06.146 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 258], 00:09:06.147 | 30.00th=[ 269], 40.00th=[ 449], 50.00th=[40633], 60.00th=[41157], 00:09:06.147 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:06.147 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:06.147 | 99.99th=[42206] 00:09:06.147 bw ( KiB/s): min= 96, max= 216, per=0.56%, avg=162.17, stdev=41.45, samples=6 00:09:06.147 iops : min= 24, max= 54, avg=40.50, stdev=10.35, samples=6 00:09:06.147 lat (usec) : 250=13.28%, 500=26.56% 00:09:06.147 lat (msec) : 50=59.38% 00:09:06.147 cpu : usr=0.16%, sys=0.00%, ctx=130, majf=0, minf=1 00:09:06.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.147 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.147 issued rwts: total=128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.147 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2213656: Mon Nov 18 12:52:03 2024 00:09:06.147 read: IOPS=1419, BW=5678KiB/s (5814kB/s)(18.5MiB/3345msec) 00:09:06.147 slat (usec): min=5, max=9718, avg=10.40, 
stdev=163.41 00:09:06.147 clat (usec): min=186, max=41992, avg=688.54, stdev=4329.78 00:09:06.147 lat (usec): min=193, max=50937, avg=698.93, stdev=4364.71 00:09:06.147 clat percentiles (usec): 00:09:06.147 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:09:06.147 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 227], 00:09:06.147 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 251], 00:09:06.147 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:09:06.147 | 99.99th=[42206] 00:09:06.147 bw ( KiB/s): min= 93, max=17448, per=21.73%, avg=6319.50, stdev=8539.85, samples=6 00:09:06.147 iops : min= 23, max= 4362, avg=1579.83, stdev=2135.00, samples=6 00:09:06.147 lat (usec) : 250=94.21%, 500=4.61%, 750=0.02% 00:09:06.147 lat (msec) : 50=1.14% 00:09:06.147 cpu : usr=0.39%, sys=1.23%, ctx=4752, majf=0, minf=1 00:09:06.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.147 issued rwts: total=4749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.147 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2213657: Mon Nov 18 12:52:03 2024 00:09:06.147 read: IOPS=4530, BW=17.7MiB/s (18.6MB/s)(51.6MiB/2917msec) 00:09:06.147 slat (usec): min=4, max=15629, avg= 9.62, stdev=189.30 00:09:06.147 clat (usec): min=149, max=537, avg=208.42, stdev=20.95 00:09:06.147 lat (usec): min=154, max=15998, avg=218.04, stdev=192.42 00:09:06.147 clat percentiles (usec): 00:09:06.147 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:09:06.147 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:09:06.147 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 245], 00:09:06.147 | 
99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 314], 99.95th=[ 371], 00:09:06.147 | 99.99th=[ 519] 00:09:06.147 bw ( KiB/s): min=16968, max=18896, per=62.81%, avg=18264.00, stdev=789.37, samples=5 00:09:06.147 iops : min= 4242, max= 4724, avg=4566.00, stdev=197.34, samples=5 00:09:06.147 lat (usec) : 250=95.69%, 500=4.28%, 750=0.02% 00:09:06.147 cpu : usr=1.41%, sys=3.81%, ctx=13217, majf=0, minf=2 00:09:06.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.147 issued rwts: total=13215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.147 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2213658: Mon Nov 18 12:52:03 2024 00:09:06.147 read: IOPS=2303, BW=9213KiB/s (9435kB/s)(24.3MiB/2703msec) 00:09:06.147 slat (nsec): min=6968, max=35302, avg=9389.91, stdev=1623.10 00:09:06.147 clat (usec): min=175, max=41959, avg=421.63, stdev=2642.29 00:09:06.147 lat (usec): min=183, max=41982, avg=431.02, stdev=2643.12 00:09:06.147 clat percentiles (usec): 00:09:06.147 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:09:06.147 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:09:06.147 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:09:06.147 | 99.00th=[ 297], 99.50th=[ 429], 99.90th=[41157], 99.95th=[41157], 00:09:06.147 | 99.99th=[42206] 00:09:06.147 bw ( KiB/s): min= 96, max=15520, per=30.19%, avg=8779.20, stdev=8007.38, samples=5 00:09:06.147 iops : min= 24, max= 3880, avg=2194.80, stdev=2001.84, samples=5 00:09:06.147 lat (usec) : 250=59.16%, 500=40.37%, 750=0.02% 00:09:06.147 lat (msec) : 20=0.02%, 50=0.42% 00:09:06.147 cpu : usr=0.81%, sys=2.66%, ctx=6227, majf=0, minf=1 00:09:06.147 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.147 issued rwts: total=6227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.147 00:09:06.147 Run status group 0 (all jobs): 00:09:06.147 READ: bw=28.4MiB/s (29.8MB/s), 161KiB/s-17.7MiB/s (165kB/s-18.6MB/s), io=95.0MiB (99.6MB), run=2703-3345msec 00:09:06.147 00:09:06.147 Disk stats (read/write): 00:09:06.147 nvme0n1: ios=126/0, merge=0/0, ticks=3073/0, in_queue=3073, util=95.69% 00:09:06.147 nvme0n2: ios=4742/0, merge=0/0, ticks=2998/0, in_queue=2998, util=95.79% 00:09:06.147 nvme0n3: ios=12998/0, merge=0/0, ticks=2649/0, in_queue=2649, util=95.50% 00:09:06.147 nvme0n4: ios=5875/0, merge=0/0, ticks=2514/0, in_queue=2514, util=96.45% 00:09:06.406 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.406 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:06.665 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.665 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:06.925 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.925 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:06.925 
12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.925 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:07.184 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:07.184 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2213517 00:09:07.184 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:07.184 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.184 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.184 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:07.443 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:07.443 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.443 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:07.443 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.443 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:07.443 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:07.443 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:07.443 nvmf hotplug test: 
fio failed as expected 00:09:07.443 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.443 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.703 rmmod nvme_tcp 00:09:07.703 rmmod nvme_fabrics 00:09:07.703 rmmod nvme_keyring 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2210578 ']' 00:09:07.703 12:52:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2210578 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2210578 ']' 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2210578 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2210578 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:07.703 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2210578' 00:09:07.703 killing process with pid 2210578 00:09:07.704 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2210578 00:09:07.704 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2210578 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.963 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.873 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.873 00:09:09.873 real 0m27.611s 00:09:09.873 user 1m50.066s 00:09:09.873 sys 0m8.642s 00:09:09.873 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.873 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.873 ************************************ 00:09:09.873 END TEST nvmf_fio_target 00:09:09.873 ************************************ 00:09:09.873 12:52:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:09.873 12:52:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:09.873 12:52:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.873 12:52:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.873 ************************************ 00:09:09.873 START TEST nvmf_bdevio 00:09:09.873 ************************************ 00:09:09.873 12:52:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:10.133 * Looking for test storage... 00:09:10.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:09:10.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.133 --rc genhtml_branch_coverage=1 00:09:10.133 --rc genhtml_function_coverage=1 00:09:10.133 --rc genhtml_legend=1 00:09:10.133 --rc geninfo_all_blocks=1 00:09:10.133 --rc geninfo_unexecuted_blocks=1 00:09:10.133 00:09:10.133 ' 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:10.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.133 --rc genhtml_branch_coverage=1 00:09:10.133 --rc genhtml_function_coverage=1 00:09:10.133 --rc genhtml_legend=1 00:09:10.133 --rc geninfo_all_blocks=1 00:09:10.133 --rc geninfo_unexecuted_blocks=1 00:09:10.133 00:09:10.133 ' 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:10.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.133 --rc genhtml_branch_coverage=1 00:09:10.133 --rc genhtml_function_coverage=1 00:09:10.133 --rc genhtml_legend=1 00:09:10.133 --rc geninfo_all_blocks=1 00:09:10.133 --rc geninfo_unexecuted_blocks=1 00:09:10.133 00:09:10.133 ' 00:09:10.133 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:10.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.133 --rc genhtml_branch_coverage=1 00:09:10.133 --rc genhtml_function_coverage=1 00:09:10.133 --rc genhtml_legend=1 00:09:10.133 --rc geninfo_all_blocks=1 00:09:10.133 --rc geninfo_unexecuted_blocks=1 00:09:10.133 00:09:10.133 ' 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.134 12:52:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.134 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.707 12:52:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.707 12:52:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:16.707 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:16.707 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.707 
12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:16.707 Found net devices under 0000:86:00.0: cvl_0_0 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:16.707 Found net devices under 0000:86:00.1: cvl_0_1 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:16.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:09:16.707 00:09:16.707 --- 10.0.0.2 ping statistics --- 00:09:16.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.707 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:09:16.707 00:09:16.707 --- 10.0.0.1 ping statistics --- 00:09:16.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.707 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.707 12:52:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2218089 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2218089 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2218089 ']' 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:16.707 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.707 [2024-11-18 12:52:13.825993] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:09:16.707 [2024-11-18 12:52:13.826044] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.707 [2024-11-18 12:52:13.905052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.707 [2024-11-18 12:52:13.949837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.707 [2024-11-18 12:52:13.949871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.707 [2024-11-18 12:52:13.949881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.707 [2024-11-18 12:52:13.949887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.707 [2024-11-18 12:52:13.949892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:16.708 [2024-11-18 12:52:13.951551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:16.708 [2024-11-18 12:52:13.951658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:16.708 [2024-11-18 12:52:13.951766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.708 [2024-11-18 12:52:13.951767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.708 [2024-11-18 12:52:14.088321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.708 12:52:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.708 Malloc0 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.708 [2024-11-18 12:52:14.165575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:16.708 { 00:09:16.708 "params": { 00:09:16.708 "name": "Nvme$subsystem", 00:09:16.708 "trtype": "$TEST_TRANSPORT", 00:09:16.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.708 "adrfam": "ipv4", 00:09:16.708 "trsvcid": "$NVMF_PORT", 00:09:16.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.708 "hdgst": ${hdgst:-false}, 00:09:16.708 "ddgst": ${ddgst:-false} 00:09:16.708 }, 00:09:16.708 "method": "bdev_nvme_attach_controller" 00:09:16.708 } 00:09:16.708 EOF 00:09:16.708 )") 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:16.708 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:16.708 "params": { 00:09:16.708 "name": "Nvme1", 00:09:16.708 "trtype": "tcp", 00:09:16.708 "traddr": "10.0.0.2", 00:09:16.708 "adrfam": "ipv4", 00:09:16.708 "trsvcid": "4420", 00:09:16.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.708 "hdgst": false, 00:09:16.708 "ddgst": false 00:09:16.708 }, 00:09:16.708 "method": "bdev_nvme_attach_controller" 00:09:16.708 }' 00:09:16.708 [2024-11-18 12:52:14.214877] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:09:16.708 [2024-11-18 12:52:14.214917] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218158 ] 00:09:16.708 [2024-11-18 12:52:14.289685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:16.708 [2024-11-18 12:52:14.334430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.708 [2024-11-18 12:52:14.334538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.708 [2024-11-18 12:52:14.334539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.965 I/O targets: 00:09:16.965 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:16.965 00:09:16.965 00:09:16.965 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.965 http://cunit.sourceforge.net/ 00:09:16.965 00:09:16.965 00:09:16.965 Suite: bdevio tests on: Nvme1n1 00:09:16.965 Test: blockdev write read block ...passed 00:09:17.222 Test: blockdev write zeroes read block ...passed 00:09:17.222 Test: blockdev write zeroes read no split ...passed 00:09:17.222 Test: blockdev write zeroes read split 
...passed 00:09:17.222 Test: blockdev write zeroes read split partial ...passed 00:09:17.222 Test: blockdev reset ...[2024-11-18 12:52:14.729071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:17.222 [2024-11-18 12:52:14.729136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc00340 (9): Bad file descriptor 00:09:17.222 [2024-11-18 12:52:14.783572] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:17.222 passed 00:09:17.222 Test: blockdev write read 8 blocks ...passed 00:09:17.222 Test: blockdev write read size > 128k ...passed 00:09:17.222 Test: blockdev write read invalid size ...passed 00:09:17.222 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:17.222 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:17.222 Test: blockdev write read max offset ...passed 00:09:17.222 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:17.479 Test: blockdev writev readv 8 blocks ...passed 00:09:17.479 Test: blockdev writev readv 30 x 1block ...passed 00:09:17.479 Test: blockdev writev readv block ...passed 00:09:17.479 Test: blockdev writev readv size > 128k ...passed 00:09:17.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:17.479 Test: blockdev comparev and writev ...[2024-11-18 12:52:15.036124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.479 [2024-11-18 12:52:15.036154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.036172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.479 [2024-11-18 
12:52:15.036180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.036421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.479 [2024-11-18 12:52:15.036432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.036444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.479 [2024-11-18 12:52:15.036451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.036689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.479 [2024-11-18 12:52:15.036700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.036712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.479 [2024-11-18 12:52:15.036719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.036970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.479 [2024-11-18 12:52:15.036980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.036991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:17.479 [2024-11-18 12:52:15.036998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:17.479 passed 00:09:17.479 Test: blockdev nvme passthru rw ...passed 00:09:17.479 Test: blockdev nvme passthru vendor specific ...[2024-11-18 12:52:15.118775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:17.479 [2024-11-18 12:52:15.118791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.118894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:17.479 [2024-11-18 12:52:15.118904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.119003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:17.479 [2024-11-18 12:52:15.119013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:17.479 [2024-11-18 12:52:15.119120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:17.479 [2024-11-18 12:52:15.119129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:17.479 passed 00:09:17.479 Test: blockdev nvme admin passthru ...passed 00:09:17.479 Test: blockdev copy ...passed 00:09:17.479 00:09:17.479 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.479 suites 1 1 n/a 0 0 00:09:17.479 tests 23 23 23 0 0 00:09:17.479 asserts 152 152 152 0 n/a 00:09:17.479 00:09:17.479 Elapsed time = 1.121 seconds 
00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.737 rmmod nvme_tcp 00:09:17.737 rmmod nvme_fabrics 00:09:17.737 rmmod nvme_keyring 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2218089 ']' 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2218089 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 
-- # '[' -z 2218089 ']' 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2218089 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:17.737 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2218089 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2218089' 00:09:17.996 killing process with pid 2218089 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2218089 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2218089 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.996 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.537 00:09:20.537 real 0m10.126s 00:09:20.537 user 0m10.611s 00:09:20.537 sys 0m5.073s 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 ************************************ 00:09:20.537 END TEST nvmf_bdevio 00:09:20.537 ************************************ 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:20.537 00:09:20.537 real 4m36.224s 00:09:20.537 user 10m20.826s 00:09:20.537 sys 1m37.677s 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 ************************************ 00:09:20.537 END TEST nvmf_target_core 00:09:20.537 ************************************ 00:09:20.537 12:52:17 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:20.537 12:52:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:20.537 12:52:17 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:20.537 12:52:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:20.537 ************************************ 00:09:20.537 START TEST nvmf_target_extra 00:09:20.537 ************************************ 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:20.537 * Looking for test storage... 00:09:20.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:20.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.537 --rc genhtml_branch_coverage=1 00:09:20.537 --rc genhtml_function_coverage=1 00:09:20.537 --rc genhtml_legend=1 00:09:20.537 --rc geninfo_all_blocks=1 
00:09:20.537 --rc geninfo_unexecuted_blocks=1 00:09:20.537 00:09:20.537 ' 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:20.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.537 --rc genhtml_branch_coverage=1 00:09:20.537 --rc genhtml_function_coverage=1 00:09:20.537 --rc genhtml_legend=1 00:09:20.537 --rc geninfo_all_blocks=1 00:09:20.537 --rc geninfo_unexecuted_blocks=1 00:09:20.537 00:09:20.537 ' 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:20.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.537 --rc genhtml_branch_coverage=1 00:09:20.537 --rc genhtml_function_coverage=1 00:09:20.537 --rc genhtml_legend=1 00:09:20.537 --rc geninfo_all_blocks=1 00:09:20.537 --rc geninfo_unexecuted_blocks=1 00:09:20.537 00:09:20.537 ' 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:20.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.537 --rc genhtml_branch_coverage=1 00:09:20.537 --rc genhtml_function_coverage=1 00:09:20.537 --rc genhtml_legend=1 00:09:20.537 --rc geninfo_all_blocks=1 00:09:20.537 --rc geninfo_unexecuted_blocks=1 00:09:20.537 00:09:20.537 ' 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.537 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.538 12:52:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:20.538 ************************************ 00:09:20.538 START TEST nvmf_example 00:09:20.538 ************************************ 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:20.538 * Looking for test storage... 00:09:20.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.538 
12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:20.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.538 --rc genhtml_branch_coverage=1 00:09:20.538 --rc genhtml_function_coverage=1 00:09:20.538 --rc genhtml_legend=1 00:09:20.538 --rc geninfo_all_blocks=1 00:09:20.538 --rc geninfo_unexecuted_blocks=1 00:09:20.538 00:09:20.538 ' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:20.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.538 --rc genhtml_branch_coverage=1 00:09:20.538 --rc genhtml_function_coverage=1 00:09:20.538 --rc genhtml_legend=1 00:09:20.538 --rc geninfo_all_blocks=1 00:09:20.538 --rc geninfo_unexecuted_blocks=1 00:09:20.538 00:09:20.538 ' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:20.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.538 --rc genhtml_branch_coverage=1 00:09:20.538 --rc genhtml_function_coverage=1 00:09:20.538 --rc genhtml_legend=1 00:09:20.538 --rc geninfo_all_blocks=1 00:09:20.538 --rc geninfo_unexecuted_blocks=1 00:09:20.538 00:09:20.538 ' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:20.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.538 --rc 
genhtml_branch_coverage=1 00:09:20.538 --rc genhtml_function_coverage=1 00:09:20.538 --rc genhtml_legend=1 00:09:20.538 --rc geninfo_all_blocks=1 00:09:20.538 --rc geninfo_unexecuted_blocks=1 00:09:20.538 00:09:20.538 ' 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.538 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.539 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:20.799 12:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:20.799 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.800 
12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.800 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.375 12:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:27.375 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:27.375 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:27.375 Found net devices under 0000:86:00.0: cvl_0_0 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.375 12:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:27.375 Found net devices under 0000:86:00.1: cvl_0_1 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.375 
12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.375 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:09:27.375 00:09:27.375 --- 10.0.0.2 ping statistics --- 00:09:27.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.375 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:09:27.375 00:09:27.375 --- 10.0.0.1 ping statistics --- 00:09:27.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.375 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.375 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.375 12:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2221979 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2221979 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 2221979 ']' 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:27.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:27.376 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:27.634 
12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:27.634 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:39.839 Initializing NVMe Controllers 00:09:39.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:39.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:39.839 Initialization complete. Launching workers. 00:09:39.839 ======================================================== 00:09:39.839 Latency(us) 00:09:39.839 Device Information : IOPS MiB/s Average min max 00:09:39.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18399.20 71.87 3477.99 706.33 17628.04 00:09:39.839 ======================================================== 00:09:39.839 Total : 18399.20 71.87 3477.99 706.33 17628.04 00:09:39.839 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.839 rmmod nvme_tcp 00:09:39.839 rmmod nvme_fabrics 00:09:39.839 rmmod nvme_keyring 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2221979 ']' 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2221979 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 2221979 ']' 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 2221979 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2221979 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2221979' 00:09:39.839 killing process with pid 2221979 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 2221979 00:09:39.839 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 2221979 00:09:39.839 nvmf threads initialize successfully 00:09:39.839 bdev subsystem init successfully 00:09:39.839 created a nvmf target service 00:09:39.840 create targets's poll groups done 00:09:39.840 all subsystems of target started 00:09:39.840 nvmf target is running 00:09:39.840 all subsystems of target stopped 00:09:39.840 destroy targets's poll groups done 00:09:39.840 destroyed the nvmf target service 00:09:39.840 bdev subsystem 
finish successfully 00:09:39.840 nvmf threads destroy successfully 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.840 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.409 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.409 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:40.409 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.409 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.409 00:09:40.409 real 0m19.931s 00:09:40.409 user 0m46.277s 00:09:40.409 sys 0m6.131s 00:09:40.409 
12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.409 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.409 ************************************ 00:09:40.409 END TEST nvmf_example 00:09:40.409 ************************************ 00:09:40.409 12:52:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:40.409 12:52:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:40.410 12:52:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.410 12:52:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:40.410 ************************************ 00:09:40.410 START TEST nvmf_filesystem 00:09:40.410 ************************************ 00:09:40.410 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:40.673 * Looking for test storage... 
00:09:40.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:40.673 
12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:40.673 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:40.673 --rc genhtml_branch_coverage=1 00:09:40.673 --rc genhtml_function_coverage=1 00:09:40.673 --rc genhtml_legend=1 00:09:40.673 --rc geninfo_all_blocks=1 00:09:40.673 --rc geninfo_unexecuted_blocks=1 00:09:40.673 00:09:40.673 ' 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:40.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.673 --rc genhtml_branch_coverage=1 00:09:40.673 --rc genhtml_function_coverage=1 00:09:40.673 --rc genhtml_legend=1 00:09:40.673 --rc geninfo_all_blocks=1 00:09:40.673 --rc geninfo_unexecuted_blocks=1 00:09:40.673 00:09:40.673 ' 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:40.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.673 --rc genhtml_branch_coverage=1 00:09:40.673 --rc genhtml_function_coverage=1 00:09:40.673 --rc genhtml_legend=1 00:09:40.673 --rc geninfo_all_blocks=1 00:09:40.673 --rc geninfo_unexecuted_blocks=1 00:09:40.673 00:09:40.673 ' 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:40.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.673 --rc genhtml_branch_coverage=1 00:09:40.673 --rc genhtml_function_coverage=1 00:09:40.673 --rc genhtml_legend=1 00:09:40.673 --rc geninfo_all_blocks=1 00:09:40.673 --rc geninfo_unexecuted_blocks=1 00:09:40.673 00:09:40.673 ' 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:40.673 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:40.673 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:40.673 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:40.674 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:40.674 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:40.674 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:40.674 
12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:40.674 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:40.674 #define SPDK_CONFIG_H 00:09:40.674 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:40.674 #define SPDK_CONFIG_APPS 1 00:09:40.674 #define SPDK_CONFIG_ARCH native 00:09:40.674 #undef SPDK_CONFIG_ASAN 00:09:40.674 #undef SPDK_CONFIG_AVAHI 00:09:40.674 #undef SPDK_CONFIG_CET 00:09:40.674 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:40.674 #define SPDK_CONFIG_COVERAGE 1 00:09:40.674 #define SPDK_CONFIG_CROSS_PREFIX 00:09:40.674 #undef SPDK_CONFIG_CRYPTO 00:09:40.674 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:40.674 #undef SPDK_CONFIG_CUSTOMOCF 00:09:40.674 #undef SPDK_CONFIG_DAOS 00:09:40.674 #define SPDK_CONFIG_DAOS_DIR 00:09:40.674 #define SPDK_CONFIG_DEBUG 1 00:09:40.674 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:40.674 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:40.674 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:40.674 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:40.674 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:40.674 #undef SPDK_CONFIG_DPDK_UADK 00:09:40.675 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:40.675 #define SPDK_CONFIG_EXAMPLES 1 00:09:40.675 #undef SPDK_CONFIG_FC 00:09:40.675 #define SPDK_CONFIG_FC_PATH 00:09:40.675 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:40.675 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:40.675 #define SPDK_CONFIG_FSDEV 1 00:09:40.675 #undef SPDK_CONFIG_FUSE 00:09:40.675 #undef SPDK_CONFIG_FUZZER 00:09:40.675 #define SPDK_CONFIG_FUZZER_LIB 00:09:40.675 #undef SPDK_CONFIG_GOLANG 00:09:40.675 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:40.675 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:40.675 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:40.675 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:40.675 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:40.675 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:40.675 #undef SPDK_CONFIG_HAVE_LZ4 00:09:40.675 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:40.675 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:40.675 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:40.675 #define SPDK_CONFIG_IDXD 1 00:09:40.675 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:40.675 #undef SPDK_CONFIG_IPSEC_MB 00:09:40.675 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:40.675 #define SPDK_CONFIG_ISAL 1 00:09:40.675 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:40.675 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:40.675 #define SPDK_CONFIG_LIBDIR 00:09:40.675 #undef SPDK_CONFIG_LTO 00:09:40.675 #define SPDK_CONFIG_MAX_LCORES 128 00:09:40.675 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:40.675 #define SPDK_CONFIG_NVME_CUSE 1 00:09:40.675 #undef SPDK_CONFIG_OCF 00:09:40.675 #define SPDK_CONFIG_OCF_PATH 00:09:40.675 #define SPDK_CONFIG_OPENSSL_PATH 00:09:40.675 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:40.675 #define SPDK_CONFIG_PGO_DIR 00:09:40.675 #undef SPDK_CONFIG_PGO_USE 00:09:40.675 #define SPDK_CONFIG_PREFIX /usr/local 00:09:40.675 #undef SPDK_CONFIG_RAID5F 00:09:40.675 #undef SPDK_CONFIG_RBD 00:09:40.675 #define SPDK_CONFIG_RDMA 1 00:09:40.675 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:40.675 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:40.675 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:40.675 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:40.675 #define SPDK_CONFIG_SHARED 1 00:09:40.675 #undef SPDK_CONFIG_SMA 00:09:40.675 #define SPDK_CONFIG_TESTS 1 00:09:40.675 #undef SPDK_CONFIG_TSAN 00:09:40.675 #define SPDK_CONFIG_UBLK 1 00:09:40.675 #define SPDK_CONFIG_UBSAN 1 00:09:40.675 #undef SPDK_CONFIG_UNIT_TESTS 00:09:40.675 #undef SPDK_CONFIG_URING 00:09:40.675 #define SPDK_CONFIG_URING_PATH 00:09:40.675 #undef SPDK_CONFIG_URING_ZNS 00:09:40.675 #undef SPDK_CONFIG_USDT 00:09:40.675 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:40.675 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:40.675 #define SPDK_CONFIG_VFIO_USER 1 00:09:40.675 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:40.675 #define SPDK_CONFIG_VHOST 1 00:09:40.675 #define SPDK_CONFIG_VIRTIO 1 00:09:40.675 #undef SPDK_CONFIG_VTUNE 00:09:40.675 #define SPDK_CONFIG_VTUNE_DIR 00:09:40.675 #define SPDK_CONFIG_WERROR 1 00:09:40.675 #define SPDK_CONFIG_WPDK_DIR 00:09:40.675 #undef SPDK_CONFIG_XNVME 00:09:40.675 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:40.675 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:40.675 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:40.676 
12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:40.676 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:40.676 
12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:40.676 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:40.676 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:09:40.677 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2224401 ]] 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2224401 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.K8s6Lz 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.K8s6Lz/tests/target /tmp/spdk.K8s6Lz 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189141536768 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963981824 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6822445056 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971957760 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981988864 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169748992 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192797184 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23048192 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981505536 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981992960 00:09:40.678 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=487424 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:09:40.678 * Looking for test storage... 
00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.678 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189141536768 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9037037568 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.940 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:40.940 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:40.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.940 --rc genhtml_branch_coverage=1 00:09:40.940 --rc genhtml_function_coverage=1 00:09:40.940 --rc genhtml_legend=1 00:09:40.940 --rc geninfo_all_blocks=1 00:09:40.940 --rc geninfo_unexecuted_blocks=1 00:09:40.940 00:09:40.940 ' 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:40.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.940 --rc genhtml_branch_coverage=1 00:09:40.940 --rc genhtml_function_coverage=1 00:09:40.940 --rc genhtml_legend=1 00:09:40.940 --rc geninfo_all_blocks=1 00:09:40.940 --rc geninfo_unexecuted_blocks=1 00:09:40.940 00:09:40.940 ' 00:09:40.940 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:40.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.940 --rc genhtml_branch_coverage=1 00:09:40.940 --rc genhtml_function_coverage=1 00:09:40.940 --rc genhtml_legend=1 00:09:40.940 --rc geninfo_all_blocks=1 00:09:40.940 --rc geninfo_unexecuted_blocks=1 00:09:40.940 00:09:40.940 ' 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:40.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.941 --rc genhtml_branch_coverage=1 00:09:40.941 --rc genhtml_function_coverage=1 00:09:40.941 --rc genhtml_legend=1 00:09:40.941 --rc geninfo_all_blocks=1 00:09:40.941 --rc geninfo_unexecuted_blocks=1 00:09:40.941 00:09:40.941 ' 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.941 12:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:40.941 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.517 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:47.517 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:47.517 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.517 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:47.517 Found net devices under 0000:86:00.0: cvl_0_0 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:47.517 Found net devices under 0000:86:00.1: cvl_0_1 00:09:47.517 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.517 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:47.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:09:47.518 00:09:47.518 --- 10.0.0.2 ping statistics --- 00:09:47.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.518 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:09:47.518 00:09:47.518 --- 10.0.0.1 ping statistics --- 00:09:47.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.518 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:47.518 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.518 ************************************ 00:09:47.518 START TEST nvmf_filesystem_no_in_capsule 00:09:47.518 ************************************ 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2227451 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2227451 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@833 -- # '[' -z 2227451 ']' 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.518 [2024-11-18 12:52:44.597591] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:09:47.518 [2024-11-18 12:52:44.597633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.518 [2024-11-18 12:52:44.678145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.518 [2024-11-18 12:52:44.719303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.518 [2024-11-18 12:52:44.719341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:47.518 [2024-11-18 12:52:44.719349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.518 [2024-11-18 12:52:44.719361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.518 [2024-11-18 12:52:44.719366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.518 [2024-11-18 12:52:44.720929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.518 [2024-11-18 12:52:44.721040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.518 [2024-11-18 12:52:44.721124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.518 [2024-11-18 12:52:44.721125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.518 [2024-11-18 12:52:44.866250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.518 Malloc1 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.518 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.519 [2024-11-18 12:52:45.015331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:09:47.519 12:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:09:47.519 { 00:09:47.519 "name": "Malloc1", 00:09:47.519 "aliases": [ 00:09:47.519 "13b333c1-b3df-4dfd-a900-f4216c4e7856" 00:09:47.519 ], 00:09:47.519 "product_name": "Malloc disk", 00:09:47.519 "block_size": 512, 00:09:47.519 "num_blocks": 1048576, 00:09:47.519 "uuid": "13b333c1-b3df-4dfd-a900-f4216c4e7856", 00:09:47.519 "assigned_rate_limits": { 00:09:47.519 "rw_ios_per_sec": 0, 00:09:47.519 "rw_mbytes_per_sec": 0, 00:09:47.519 "r_mbytes_per_sec": 0, 00:09:47.519 "w_mbytes_per_sec": 0 00:09:47.519 }, 00:09:47.519 "claimed": true, 00:09:47.519 "claim_type": "exclusive_write", 00:09:47.519 "zoned": false, 00:09:47.519 "supported_io_types": { 00:09:47.519 "read": true, 00:09:47.519 "write": true, 00:09:47.519 "unmap": true, 00:09:47.519 "flush": true, 00:09:47.519 "reset": true, 00:09:47.519 "nvme_admin": false, 00:09:47.519 "nvme_io": false, 00:09:47.519 "nvme_io_md": false, 00:09:47.519 "write_zeroes": true, 00:09:47.519 "zcopy": true, 00:09:47.519 "get_zone_info": false, 00:09:47.519 "zone_management": false, 00:09:47.519 "zone_append": false, 00:09:47.519 "compare": false, 00:09:47.519 "compare_and_write": 
false, 00:09:47.519 "abort": true, 00:09:47.519 "seek_hole": false, 00:09:47.519 "seek_data": false, 00:09:47.519 "copy": true, 00:09:47.519 "nvme_iov_md": false 00:09:47.519 }, 00:09:47.519 "memory_domains": [ 00:09:47.519 { 00:09:47.519 "dma_device_id": "system", 00:09:47.519 "dma_device_type": 1 00:09:47.519 }, 00:09:47.519 { 00:09:47.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.519 "dma_device_type": 2 00:09:47.519 } 00:09:47.519 ], 00:09:47.519 "driver_specific": {} 00:09:47.519 } 00:09:47.519 ]' 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:47.519 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.895 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:48.895 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:09:48.895 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.895 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:48.895 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:50.794 12:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:50.794 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:51.052 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:51.986 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:52.921 12:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.921 ************************************ 00:09:52.921 START TEST filesystem_ext4 00:09:52.921 ************************************ 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:09:52.921 12:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:09:52.921 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:52.921 mke2fs 1.47.0 (5-Feb-2023) 00:09:52.921 Discarding device blocks: 0/522240 done 00:09:52.921 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:52.921 Filesystem UUID: 9d1e947d-cf15-479f-8126-2bb63dd65610 00:09:52.921 Superblock backups stored on blocks: 00:09:52.921 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:52.921 00:09:52.921 Allocating group tables: 0/64 done 00:09:52.921 Writing inode tables: 0/64 done 00:09:52.921 Creating journal (8192 blocks): done 00:09:53.180 Writing superblocks and filesystem accounting information: 0/64 done 00:09:53.180 00:09:53.180 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:09:53.180 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:58.451 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:58.710 12:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2227451 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:58.710 00:09:58.710 real 0m5.812s 00:09:58.710 user 0m0.018s 00:09:58.710 sys 0m0.078s 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:58.710 ************************************ 00:09:58.710 END TEST filesystem_ext4 00:09:58.710 ************************************ 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:58.710 
12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.710 ************************************ 00:09:58.710 START TEST filesystem_btrfs 00:09:58.710 ************************************ 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:58.710 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:09:58.711 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:09:58.711 12:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:09:58.711 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:09:58.711 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:58.969 btrfs-progs v6.8.1 00:09:58.969 See https://btrfs.readthedocs.io for more information. 00:09:58.969 00:09:58.969 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:58.969 NOTE: several default settings have changed in version 5.15, please make sure 00:09:58.969 this does not affect your deployments: 00:09:58.969 - DUP for metadata (-m dup) 00:09:58.969 - enabled no-holes (-O no-holes) 00:09:58.969 - enabled free-space-tree (-R free-space-tree) 00:09:58.969 00:09:58.969 Label: (null) 00:09:58.969 UUID: 1a7ad7d8-ef44-4ee9-90da-0327746250de 00:09:58.969 Node size: 16384 00:09:58.969 Sector size: 4096 (CPU page size: 4096) 00:09:58.969 Filesystem size: 510.00MiB 00:09:58.969 Block group profiles: 00:09:58.969 Data: single 8.00MiB 00:09:58.969 Metadata: DUP 32.00MiB 00:09:58.969 System: DUP 8.00MiB 00:09:58.969 SSD detected: yes 00:09:58.969 Zoned device: no 00:09:58.969 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:58.969 Checksum: crc32c 00:09:58.969 Number of devices: 1 00:09:58.969 Devices: 00:09:58.969 ID SIZE PATH 00:09:58.969 1 510.00MiB /dev/nvme0n1p1 00:09:58.969 00:09:58.969 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:09:58.969 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:59.905 12:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2227451 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:59.905 00:09:59.905 real 0m1.119s 00:09:59.905 user 0m0.023s 00:09:59.905 sys 0m0.119s 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.905 
12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:59.905 ************************************ 00:09:59.905 END TEST filesystem_btrfs 00:09:59.905 ************************************ 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.905 ************************************ 00:09:59.905 START TEST filesystem_xfs 00:09:59.905 ************************************ 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:59.905 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:59.906 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:09:59.906 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:59.906 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:09:59.906 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:09:59.906 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:09:59.906 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:09:59.906 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:00.165 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:00.165 = sectsz=512 attr=2, projid32bit=1 00:10:00.165 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:00.165 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:00.165 data = bsize=4096 blocks=130560, imaxpct=25 00:10:00.165 = sunit=0 swidth=0 blks 00:10:00.165 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:00.165 log =internal log bsize=4096 blocks=16384, version=2 00:10:00.165 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:00.165 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:01.100 Discarding blocks...Done. 
00:10:01.100 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:01.100 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:03.631 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2227451 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:03.631 12:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:03.631 00:10:03.631 real 0m3.587s 00:10:03.631 user 0m0.029s 00:10:03.631 sys 0m0.071s 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:03.631 ************************************ 00:10:03.631 END TEST filesystem_xfs 00:10:03.631 ************************************ 00:10:03.631 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:03.889 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:03.889 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.889 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.889 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:03.889 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:03.889 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.889 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.889 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2227451 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2227451 ']' 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2227451 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2227451 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2227451' 00:10:04.148 killing process with pid 2227451 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 2227451 00:10:04.148 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 2227451 00:10:04.408 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:04.408 00:10:04.408 real 0m17.437s 00:10:04.408 user 1m8.616s 00:10:04.408 sys 0m1.427s 00:10:04.408 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.409 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.409 ************************************ 00:10:04.409 END TEST nvmf_filesystem_no_in_capsule 00:10:04.409 ************************************ 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.409 12:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.409 ************************************ 00:10:04.409 START TEST nvmf_filesystem_in_capsule 00:10:04.409 ************************************ 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2230659 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2230659 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2230659 ']' 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.409 12:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.409 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.668 [2024-11-18 12:53:02.111455] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:10:04.669 [2024-11-18 12:53:02.111496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.669 [2024-11-18 12:53:02.188227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.669 [2024-11-18 12:53:02.226698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.669 [2024-11-18 12:53:02.226736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.669 [2024-11-18 12:53:02.226743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.669 [2024-11-18 12:53:02.226750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.669 [2024-11-18 12:53:02.226755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
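The records above show `nvmfappstart` launching `nvmf_tgt` in the background and then `waitforlisten` polling (with `max_retries=100`) until the target is up on `/var/tmp/spdk.sock`. The pattern can be sketched as the following self-contained snippet; the socket is simulated with a plain file and the paths, delay, and retry interval are illustrative stand-ins, not the actual autotest helpers:

```shell
# Hedged sketch of the nvmfappstart + waitforlisten pattern: start a
# target in the background, then poll until its RPC endpoint appears.
sock=/tmp/sketch_spdk.sock      # stand-in for /var/tmp/spdk.sock
rm -f "$sock"
( sleep 1; touch "$sock" ) &    # stand-in for nvmf_tgt creating its socket
pid=$!
max_retries=100
i=0
while [ ! -e "$sock" ]; do      # poll until the endpoint exists
  i=$((i + 1))
  if [ "$i" -gt "$max_retries" ]; then
    echo "timed out waiting for $sock" >&2
    exit 1
  fi
  sleep 0.1
done
echo "listening"
wait "$pid"
```

In the real helper the readiness check is an RPC round-trip against the socket rather than a file-existence test, but the bounded retry loop is the same shape.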
00:10:04.669 [2024-11-18 12:53:02.228228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.669 [2024-11-18 12:53:02.228348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.669 [2024-11-18 12:53:02.228447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.669 [2024-11-18 12:53:02.228447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.669 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.669 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:04.669 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.669 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.669 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.928 [2024-11-18 12:53:02.378184] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.928 Malloc1 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.928 12:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.928 [2024-11-18 12:53:02.530201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.928 12:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.928 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:04.928 { 00:10:04.928 "name": "Malloc1", 00:10:04.928 "aliases": [ 00:10:04.928 "0fe9eab9-9e9d-4cef-b2b4-66df2186a9f4" 00:10:04.928 ], 00:10:04.928 "product_name": "Malloc disk", 00:10:04.928 "block_size": 512, 00:10:04.928 "num_blocks": 1048576, 00:10:04.928 "uuid": "0fe9eab9-9e9d-4cef-b2b4-66df2186a9f4", 00:10:04.928 "assigned_rate_limits": { 00:10:04.928 "rw_ios_per_sec": 0, 00:10:04.928 "rw_mbytes_per_sec": 0, 00:10:04.928 "r_mbytes_per_sec": 0, 00:10:04.928 "w_mbytes_per_sec": 0 00:10:04.928 }, 00:10:04.928 "claimed": true, 00:10:04.928 "claim_type": "exclusive_write", 00:10:04.928 "zoned": false, 00:10:04.928 "supported_io_types": { 00:10:04.929 "read": true, 00:10:04.929 "write": true, 00:10:04.929 "unmap": true, 00:10:04.929 "flush": true, 00:10:04.929 "reset": true, 00:10:04.929 "nvme_admin": false, 00:10:04.929 "nvme_io": false, 00:10:04.929 "nvme_io_md": false, 00:10:04.929 "write_zeroes": true, 00:10:04.929 "zcopy": true, 00:10:04.929 "get_zone_info": false, 00:10:04.929 "zone_management": false, 00:10:04.929 "zone_append": false, 00:10:04.929 "compare": false, 00:10:04.929 "compare_and_write": false, 00:10:04.929 "abort": true, 00:10:04.929 "seek_hole": false, 00:10:04.929 "seek_data": false, 00:10:04.929 "copy": true, 00:10:04.929 "nvme_iov_md": false 00:10:04.929 }, 00:10:04.929 "memory_domains": [ 00:10:04.929 { 00:10:04.929 "dma_device_id": "system", 00:10:04.929 "dma_device_type": 1 00:10:04.929 }, 00:10:04.929 { 00:10:04.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.929 "dma_device_type": 2 00:10:04.929 } 00:10:04.929 ], 00:10:04.929 
"driver_specific": {} 00:10:04.929 } 00:10:04.929 ]' 00:10:04.929 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:04.929 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:04.929 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:05.187 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:05.187 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:05.187 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:05.187 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:05.187 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.563 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.563 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:06.563 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.563 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:10:06.563 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:08.464 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:08.464 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:08.464 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.464 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:08.464 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.464 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:08.464 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:08.465 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:08.465 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:08.465 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:08.465 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:08.465 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:08.465 12:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:08.465 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:08.465 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:08.465 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:08.465 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:08.723 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:08.982 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.358 ************************************ 00:10:10.358 START TEST filesystem_in_capsule_ext4 00:10:10.358 ************************************ 00:10:10.358 12:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:10.358 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:10.358 mke2fs 1.47.0 (5-Feb-2023) 00:10:10.358 Discarding device blocks: 
0/522240 done 00:10:10.358 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:10.358 Filesystem UUID: 50e35a34-a8c8-4681-bc1b-fbd1cbee9231 00:10:10.358 Superblock backups stored on blocks: 00:10:10.358 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:10.358 00:10:10.358 Allocating group tables: 0/64 done 00:10:10.358 Writing inode tables: 0/64 done 00:10:10.358 Creating journal (8192 blocks): done 00:10:11.294 Writing superblocks and filesystem accounting information: 0/64 done 00:10:11.294 00:10:11.294 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:11.294 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2230659 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:17.858 00:10:17.858 real 0m7.174s 00:10:17.858 user 0m0.025s 00:10:17.858 sys 0m0.076s 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:17.858 ************************************ 00:10:17.858 END TEST filesystem_in_capsule_ext4 00:10:17.858 ************************************ 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.858 ************************************ 00:10:17.858 START 
TEST filesystem_in_capsule_btrfs 00:10:17.858 ************************************ 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:17.858 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:17.858 btrfs-progs v6.8.1 00:10:17.859 See https://btrfs.readthedocs.io for more information. 00:10:17.859 00:10:17.859 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:17.859 NOTE: several default settings have changed in version 5.15, please make sure 00:10:17.859 this does not affect your deployments: 00:10:17.859 - DUP for metadata (-m dup) 00:10:17.859 - enabled no-holes (-O no-holes) 00:10:17.859 - enabled free-space-tree (-R free-space-tree) 00:10:17.859 00:10:17.859 Label: (null) 00:10:17.859 UUID: da31e121-4040-4823-9212-40841dece3e3 00:10:17.859 Node size: 16384 00:10:17.859 Sector size: 4096 (CPU page size: 4096) 00:10:17.859 Filesystem size: 510.00MiB 00:10:17.859 Block group profiles: 00:10:17.859 Data: single 8.00MiB 00:10:17.859 Metadata: DUP 32.00MiB 00:10:17.859 System: DUP 8.00MiB 00:10:17.859 SSD detected: yes 00:10:17.859 Zoned device: no 00:10:17.859 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:17.859 Checksum: crc32c 00:10:17.859 Number of devices: 1 00:10:17.859 Devices: 00:10:17.859 ID SIZE PATH 00:10:17.859 1 510.00MiB /dev/nvme0n1p1 00:10:17.859 00:10:17.859 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:17.859 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2230659 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:18.118 00:10:18.118 real 0m0.758s 00:10:18.118 user 0m0.023s 00:10:18.118 sys 0m0.121s 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:18.118 ************************************ 00:10:18.118 END TEST filesystem_in_capsule_btrfs 00:10:18.118 ************************************ 00:10:18.118 12:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.118 ************************************ 00:10:18.118 START TEST filesystem_in_capsule_xfs 00:10:18.118 ************************************ 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:18.118 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:18.119 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:18.119 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:18.119 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:18.119 
12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force
00:10:18.119 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']'
00:10:18.119 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f
00:10:18.119 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:18.377 meta-data=/dev/nvme0n1p1   isize=512    agcount=4, agsize=32640 blks
00:10:18.377          =                 sectsz=512   attr=2, projid32bit=1
00:10:18.377          =                 crc=1        finobt=1, sparse=1, rmapbt=0
00:10:18.378          =                 reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:10:18.378 data     =                 bsize=4096   blocks=130560, imaxpct=25
00:10:18.378          =                 sunit=0      swidth=0 blks
00:10:18.378 naming   =version 2        bsize=4096   ascii-ci=0, ftype=1
00:10:18.378 log      =internal log     bsize=4096   blocks=16384, version=2
00:10:18.378          =                 sectsz=512   sunit=0 blks, lazy-count=1
00:10:18.378 realtime =none             extsz=4096   blocks=0, rtextents=0
00:10:19.312 Discarding blocks...Done.
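[Editor's sketch] The xtrace above (autotest_common.sh@933-@939) shows make_filesystem choosing the mkfs force flag before formatting: the test `'[' xfs = ext4 ']'` fails, so `force=-f`. That is because mkfs.ext4 prompts unless given `-F`, while mkfs.xfs and mkfs.btrfs take `-f`. A minimal sketch of that flag selection; the helper name `pick_force_flag` is hypothetical (the traced helper is `make_filesystem`):

```shell
# Hypothetical helper mirroring the force-flag choice traced above:
# mkfs.ext4 wants -F to skip its confirmation prompt, while mkfs.xfs
# and mkfs.btrfs take -f, hence '[' xfs = ext4 ']' then force=-f.
pick_force_flag() {
    case "$1" in
        ext4) echo "-F" ;;
        *)    echo "-f" ;;
    esac
}

# Usage (as in the trace): mkfs.$fstype $(pick_force_flag "$fstype") "$dev_name"
```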
00:10:19.312 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:19.312 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2230659 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
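[Editor's sketch] target/filesystem.sh@29-@30 above set `i=0` immediately before `umount /mnt/device`, which suggests a bounded retry loop around unmounting (a busy mount can need a few attempts). A generic sketch of that pattern; the attempt budget and delay are assumptions, not taken from the script:

```shell
# retry: run a command until it succeeds or the attempt budget runs out.
# Mirrors the i=0 counter before umount in target/filesystem.sh above;
# the RETRY_MAX/RETRY_DELAY defaults are assumptions.
retry() {
    local i=0 max=${RETRY_MAX:-15} delay=${RETRY_DELAY:-1}
    until "$@"; do
        i=$((i + 1))
        [ "$i" -ge "$max" ] && return 1
        sleep "$delay"
    done
}

# Usage: retry umount /mnt/device
```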
00:10:21.215 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:21.215 00:10:21.215 real 0m2.876s 00:10:21.215 user 0m0.021s 00:10:21.215 sys 0m0.078s 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:21.216 ************************************ 00:10:21.216 END TEST filesystem_in_capsule_xfs 00:10:21.216 ************************************ 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.216 12:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2230659 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2230659 ']' 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2230659 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:21.216 12:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2230659 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2230659' 00:10:21.216 killing process with pid 2230659 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 2230659 00:10:21.216 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 2230659 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:21.784 00:10:21.784 real 0m17.153s 00:10:21.784 user 1m7.469s 00:10:21.784 sys 0m1.436s 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.784 ************************************ 00:10:21.784 END TEST nvmf_filesystem_in_capsule 00:10:21.784 ************************************ 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.784 rmmod nvme_tcp 00:10:21.784 rmmod nvme_fabrics 00:10:21.784 rmmod nvme_keyring 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.784 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.692 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:23.692 00:10:23.692 real 0m43.337s 00:10:23.692 user 2m18.197s 00:10:23.692 sys 0m7.525s 00:10:23.692 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.692 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:23.692 ************************************ 00:10:23.692 END TEST nvmf_filesystem 00:10:23.692 ************************************ 00:10:23.952 12:53:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:23.952 12:53:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:23.952 12:53:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:23.953 ************************************ 00:10:23.953 START TEST nvmf_target_discovery 00:10:23.953 ************************************ 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:23.953 * Looking for test storage... 
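[Editor's sketch] The killprocess trace above (autotest_common.sh@952-@976) shows the teardown sequence: check the pid is non-empty, probe liveness with `kill -0`, look up the process name with `ps --no-headers -o comm=`, refuse to kill a bare `sudo`, then kill and `wait` the pid. A self-contained sketch of that flow; the early-return behavior for an already-dead pid is an assumption:

```shell
# Sketch of the killprocess helper traced above: liveness probe,
# sudo guard, kill, then reap. Exact error handling is an assumption.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone (assumed behavior)
    local name
    name=$(ps --no-headers -o comm= "$pid")  # same probe as the trace
    [ "$name" = sudo ] && return 1           # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; only works for our children
}
```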
00:10:23.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:23.953 
12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.953 --rc genhtml_branch_coverage=1 00:10:23.953 --rc genhtml_function_coverage=1 00:10:23.953 --rc genhtml_legend=1 00:10:23.953 --rc geninfo_all_blocks=1 00:10:23.953 --rc geninfo_unexecuted_blocks=1 00:10:23.953 00:10:23.953 ' 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.953 --rc genhtml_branch_coverage=1 00:10:23.953 --rc genhtml_function_coverage=1 00:10:23.953 --rc genhtml_legend=1 00:10:23.953 --rc geninfo_all_blocks=1 00:10:23.953 --rc geninfo_unexecuted_blocks=1 00:10:23.953 00:10:23.953 ' 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.953 --rc genhtml_branch_coverage=1 00:10:23.953 --rc genhtml_function_coverage=1 00:10:23.953 --rc genhtml_legend=1 00:10:23.953 --rc geninfo_all_blocks=1 00:10:23.953 --rc geninfo_unexecuted_blocks=1 00:10:23.953 00:10:23.953 ' 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:23.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.953 --rc genhtml_branch_coverage=1 00:10:23.953 --rc genhtml_function_coverage=1 00:10:23.953 --rc genhtml_legend=1 00:10:23.953 --rc geninfo_all_blocks=1 00:10:23.953 --rc geninfo_unexecuted_blocks=1 00:10:23.953 00:10:23.953 ' 00:10:23.953 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.953 12:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.213 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
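[Editor's sketch] The lcov check traced earlier (scripts/common.sh@333-@368, the `lt 1.15 2` / `cmp_versions 1.15 '<' 2` walk) splits both version strings on `.`, `-` and `:` and compares them field by field, padding the shorter one with zeros. A bash sketch of that comparison, simplified to numeric fields (the traced script also regex-validates each field, which is omitted here):

```shell
# Field-by-field version comparison mirroring the cmp_versions walk
# traced above; non-numeric fields are out of scope for this sketch.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local op=$2 IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}      # missing fields compare as 0
        ((a > b)) && { [ "$op" = '>' ]; return; }
        ((a < b)) && { [ "$op" = '<' ]; return; }
    done
    [ "$op" = '==' ]                         # all fields equal
}
```

So `lt 1.15 2` succeeds because the first fields already decide it (1 < 2), which is the branch the trace takes at scripts/common.sh@365-@368.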
00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.214 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.789 12:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.789 12:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
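[Editor's sketch] The nvmf/common.sh trace above buckets NICs by PCI vendor/device ID (`intel=0x8086`, `mellanox=0x15b3`) into the `e810`, `x722` and `mlx` arrays via `pci_bus_cache` lookups. A hypothetical classifier mirroring that bucketing; the ID list is copied from the trace (only a subset of the Mellanox IDs is distinguished, so Mellanox is collapsed to a wildcard here):

```shell
# Hypothetical classifier for the ID-to-family bucketing traced above.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX (simplification)
        *)                           echo unknown ;;
    esac
}
```

The devices found later in the log, `0000:86:00.0/1 (0x8086 - 0x159b)`, land in the e810 bucket, which is why `[[ e810 == e810 ]]` selects them as the test NICs.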
00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.789 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:30.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:30.790 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.790 12:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:30.790 Found net devices under 0000:86:00.0: cvl_0_0 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.790 12:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:30.790 Found net devices under 0000:86:00.1: cvl_0_1 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:10:30.790 00:10:30.790 --- 10.0.0.2 ping statistics --- 00:10:30.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.790 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:30.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:10:30.790 00:10:30.790 --- 10.0.0.1 ping statistics --- 00:10:30.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.790 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2237172 00:10:30.790 12:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2237172 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 2237172 ']' 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:30.790 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.790 [2024-11-18 12:53:27.793334] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:10:30.790 [2024-11-18 12:53:27.793384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.790 [2024-11-18 12:53:27.873302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.790 [2024-11-18 12:53:27.914907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:30.790 [2024-11-18 12:53:27.914947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.790 [2024-11-18 12:53:27.914954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.790 [2024-11-18 12:53:27.914960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.791 [2024-11-18 12:53:27.914964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.791 [2024-11-18 12:53:27.916543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.791 [2024-11-18 12:53:27.916653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.791 [2024-11-18 12:53:27.916760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.791 [2024-11-18 12:53:27.916761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 [2024-11-18 12:53:28.687134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 Null1 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 
12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 [2024-11-18 12:53:28.732669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 Null2 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.309 
12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.309 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:31.309 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.309 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.309 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.309 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:31.309 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.309 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.309 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.309 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 Null3 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 Null4 00:10:31.310 
12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:31.569 00:10:31.569 Discovery Log Number of Records 5, Generation counter 6 00:10:31.569 =====Discovery Log Entry 0====== 00:10:31.569 trtype: tcp 00:10:31.569 adrfam: ipv4 00:10:31.569 subtype: current discovery subsystem 00:10:31.569 treq: not required 00:10:31.569 portid: 0 00:10:31.569 trsvcid: 4420 00:10:31.569 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:31.569 traddr: 10.0.0.2 00:10:31.569 eflags: explicit discovery connections, duplicate discovery information 00:10:31.569 sectype: none 00:10:31.569 =====Discovery Log Entry 1====== 00:10:31.569 trtype: tcp 00:10:31.569 adrfam: ipv4 00:10:31.569 subtype: nvme subsystem 00:10:31.569 treq: not required 00:10:31.569 portid: 0 00:10:31.569 trsvcid: 4420 00:10:31.569 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:31.569 traddr: 10.0.0.2 00:10:31.569 eflags: none 00:10:31.569 sectype: none 00:10:31.569 =====Discovery Log Entry 2====== 00:10:31.569 
trtype: tcp 00:10:31.569 adrfam: ipv4 00:10:31.569 subtype: nvme subsystem 00:10:31.569 treq: not required 00:10:31.570 portid: 0 00:10:31.570 trsvcid: 4420 00:10:31.570 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:31.570 traddr: 10.0.0.2 00:10:31.570 eflags: none 00:10:31.570 sectype: none 00:10:31.570 =====Discovery Log Entry 3====== 00:10:31.570 trtype: tcp 00:10:31.570 adrfam: ipv4 00:10:31.570 subtype: nvme subsystem 00:10:31.570 treq: not required 00:10:31.570 portid: 0 00:10:31.570 trsvcid: 4420 00:10:31.570 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:31.570 traddr: 10.0.0.2 00:10:31.570 eflags: none 00:10:31.570 sectype: none 00:10:31.570 =====Discovery Log Entry 4====== 00:10:31.570 trtype: tcp 00:10:31.570 adrfam: ipv4 00:10:31.570 subtype: nvme subsystem 00:10:31.570 treq: not required 00:10:31.570 portid: 0 00:10:31.570 trsvcid: 4420 00:10:31.570 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:31.570 traddr: 10.0.0.2 00:10:31.570 eflags: none 00:10:31.570 sectype: none 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:31.570 Perform nvmf subsystem discovery via RPC 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.570 [ 00:10:31.570 { 00:10:31.570 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:31.570 "subtype": "Discovery", 00:10:31.570 "listen_addresses": [ 00:10:31.570 { 00:10:31.570 "trtype": "TCP", 00:10:31.570 "adrfam": "IPv4", 00:10:31.570 "traddr": "10.0.0.2", 00:10:31.570 "trsvcid": "4420" 00:10:31.570 } 00:10:31.570 ], 00:10:31.570 "allow_any_host": true, 00:10:31.570 "hosts": [] 00:10:31.570 }, 00:10:31.570 { 00:10:31.570 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:31.570 "subtype": "NVMe", 00:10:31.570 "listen_addresses": [ 00:10:31.570 { 00:10:31.570 "trtype": "TCP", 00:10:31.570 "adrfam": "IPv4", 00:10:31.570 "traddr": "10.0.0.2", 00:10:31.570 "trsvcid": "4420" 00:10:31.570 } 00:10:31.570 ], 00:10:31.570 "allow_any_host": true, 00:10:31.570 "hosts": [], 00:10:31.570 "serial_number": "SPDK00000000000001", 00:10:31.570 "model_number": "SPDK bdev Controller", 00:10:31.570 "max_namespaces": 32, 00:10:31.570 "min_cntlid": 1, 00:10:31.570 "max_cntlid": 65519, 00:10:31.570 "namespaces": [ 00:10:31.570 { 00:10:31.570 "nsid": 1, 00:10:31.570 "bdev_name": "Null1", 00:10:31.570 "name": "Null1", 00:10:31.570 "nguid": "E02280EECB3345648D118313D9FC9462", 00:10:31.570 "uuid": "e02280ee-cb33-4564-8d11-8313d9fc9462" 00:10:31.570 } 00:10:31.570 ] 00:10:31.570 }, 00:10:31.570 { 00:10:31.570 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:31.570 "subtype": "NVMe", 00:10:31.570 "listen_addresses": [ 00:10:31.570 { 00:10:31.570 "trtype": "TCP", 00:10:31.570 "adrfam": "IPv4", 00:10:31.570 "traddr": "10.0.0.2", 00:10:31.570 "trsvcid": "4420" 00:10:31.570 } 00:10:31.570 ], 00:10:31.570 "allow_any_host": true, 00:10:31.570 "hosts": [], 00:10:31.570 "serial_number": "SPDK00000000000002", 00:10:31.570 "model_number": "SPDK bdev Controller", 00:10:31.570 "max_namespaces": 32, 00:10:31.570 "min_cntlid": 1, 00:10:31.570 "max_cntlid": 65519, 00:10:31.570 "namespaces": [ 00:10:31.570 { 00:10:31.570 "nsid": 1, 00:10:31.570 "bdev_name": "Null2", 00:10:31.570 "name": "Null2", 00:10:31.570 "nguid": "119F8372758148FC96ACAF4538715C63", 00:10:31.570 "uuid": "119f8372-7581-48fc-96ac-af4538715c63" 00:10:31.570 } 00:10:31.570 ] 00:10:31.570 }, 00:10:31.570 { 00:10:31.570 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:31.570 "subtype": "NVMe", 00:10:31.570 "listen_addresses": [ 00:10:31.570 { 00:10:31.570 "trtype": "TCP", 00:10:31.570 "adrfam": "IPv4", 00:10:31.570 "traddr": "10.0.0.2", 00:10:31.570 "trsvcid": "4420" 00:10:31.570 } 
00:10:31.570 ], 00:10:31.570 "allow_any_host": true, 00:10:31.570 "hosts": [], 00:10:31.570 "serial_number": "SPDK00000000000003", 00:10:31.570 "model_number": "SPDK bdev Controller", 00:10:31.570 "max_namespaces": 32, 00:10:31.570 "min_cntlid": 1, 00:10:31.570 "max_cntlid": 65519, 00:10:31.570 "namespaces": [ 00:10:31.570 { 00:10:31.570 "nsid": 1, 00:10:31.570 "bdev_name": "Null3", 00:10:31.570 "name": "Null3", 00:10:31.570 "nguid": "5B7BB0936DBB45BAA59050F2FBA9B72C", 00:10:31.570 "uuid": "5b7bb093-6dbb-45ba-a590-50f2fba9b72c" 00:10:31.570 } 00:10:31.570 ] 00:10:31.570 }, 00:10:31.570 { 00:10:31.570 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:31.570 "subtype": "NVMe", 00:10:31.570 "listen_addresses": [ 00:10:31.570 { 00:10:31.570 "trtype": "TCP", 00:10:31.570 "adrfam": "IPv4", 00:10:31.570 "traddr": "10.0.0.2", 00:10:31.570 "trsvcid": "4420" 00:10:31.570 } 00:10:31.570 ], 00:10:31.570 "allow_any_host": true, 00:10:31.570 "hosts": [], 00:10:31.570 "serial_number": "SPDK00000000000004", 00:10:31.570 "model_number": "SPDK bdev Controller", 00:10:31.570 "max_namespaces": 32, 00:10:31.570 "min_cntlid": 1, 00:10:31.570 "max_cntlid": 65519, 00:10:31.570 "namespaces": [ 00:10:31.570 { 00:10:31.570 "nsid": 1, 00:10:31.570 "bdev_name": "Null4", 00:10:31.570 "name": "Null4", 00:10:31.570 "nguid": "25A9E56AFE74421FB2D57A39EB4159D2", 00:10:31.570 "uuid": "25a9e56a-fe74-421f-b2d5-7a39eb4159d2" 00:10:31.570 } 00:10:31.570 ] 00:10:31.570 } 00:10:31.570 ] 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.570 12:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.570 12:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.570 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.571 12:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.571 rmmod nvme_tcp 00:10:31.571 rmmod nvme_fabrics 00:10:31.571 rmmod nvme_keyring 00:10:31.571 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2237172 ']' 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2237172 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 2237172 ']' 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 2237172 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2237172 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:31.830 12:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2237172' 00:10:31.830 killing process with pid 2237172 00:10:31.830 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 2237172 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 2237172 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.831 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:10:34.369 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.369 00:10:34.369 real 0m10.096s 00:10:34.369 user 0m8.301s 00:10:34.369 sys 0m4.988s 00:10:34.369 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.370 ************************************ 00:10:34.370 END TEST nvmf_target_discovery 00:10:34.370 ************************************ 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:34.370 ************************************ 00:10:34.370 START TEST nvmf_referrals 00:10:34.370 ************************************ 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:34.370 * Looking for test storage... 
00:10:34.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:34.370 12:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:34.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.370 
--rc genhtml_branch_coverage=1 00:10:34.370 --rc genhtml_function_coverage=1 00:10:34.370 --rc genhtml_legend=1 00:10:34.370 --rc geninfo_all_blocks=1 00:10:34.370 --rc geninfo_unexecuted_blocks=1 00:10:34.370 00:10:34.370 ' 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:34.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.370 --rc genhtml_branch_coverage=1 00:10:34.370 --rc genhtml_function_coverage=1 00:10:34.370 --rc genhtml_legend=1 00:10:34.370 --rc geninfo_all_blocks=1 00:10:34.370 --rc geninfo_unexecuted_blocks=1 00:10:34.370 00:10:34.370 ' 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:34.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.370 --rc genhtml_branch_coverage=1 00:10:34.370 --rc genhtml_function_coverage=1 00:10:34.370 --rc genhtml_legend=1 00:10:34.370 --rc geninfo_all_blocks=1 00:10:34.370 --rc geninfo_unexecuted_blocks=1 00:10:34.370 00:10:34.370 ' 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:34.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.370 --rc genhtml_branch_coverage=1 00:10:34.370 --rc genhtml_function_coverage=1 00:10:34.370 --rc genhtml_legend=1 00:10:34.370 --rc geninfo_all_blocks=1 00:10:34.370 --rc geninfo_unexecuted_blocks=1 00:10:34.370 00:10:34.370 ' 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.370 
12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.370 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.371 12:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.371 12:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.371 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.948 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.948 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.948 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.949 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.949 12:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.949 Found net devices under 0000:86:00.1: cvl_0_1 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:10:40.949 00:10:40.949 --- 10.0.0.2 ping statistics --- 00:10:40.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.949 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:10:40.949 00:10:40.949 --- 10.0.0.1 ping statistics --- 00:10:40.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.949 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2240965 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2240965 00:10:40.949 
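The network bring-up recorded above (nvmf_tcp_init) boils down to a short sequence: move the target port (cvl_0_0, 10.0.0.2) into a private network namespace, leave the initiator port (cvl_0_1, 10.0.0.1) in the root namespace, open TCP/4420, and ping both ways. The sketch below is a dry run only — commands are echoed, not executed, since the real run needs root and the two harness-renamed E810 ports; the harness also flushes stale addresses first, which is omitted here:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology this log builds: the target
# port (cvl_0_0, 10.0.0.2) moves into a private netns while the initiator
# port (cvl_0_1, 10.0.0.1) stays in the root namespace. Commands are
# echoed rather than executed.
run() { echo "+ $*"; }

topology() {
  local ns=cvl_0_0_ns_spdk
  run ip netns add "$ns"
  run ip link set cvl_0_0 netns "$ns"
  run ip addr add 10.0.0.1/24 dev cvl_0_1
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run ip netns exec "$ns" ip link set lo up
  # allow discovery/IO traffic in from the initiator side
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2
  run ip netns exec "$ns" ping -c 1 10.0.0.1
}
topology
```

Because the target lives in the namespace, every target-side command in the log (including nvmf_tgt itself) is prefixed with `ip netns exec cvl_0_0_ns_spdk`.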
12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 2240965 ']' 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:40.949 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.949 [2024-11-18 12:53:37.911289] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:10:40.949 [2024-11-18 12:53:37.911337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.949 [2024-11-18 12:53:37.991759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.949 [2024-11-18 12:53:38.034208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.949 [2024-11-18 12:53:38.034245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:40.949 [2024-11-18 12:53:38.034252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.949 [2024-11-18 12:53:38.034259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.949 [2024-11-18 12:53:38.034264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.949 [2024-11-18 12:53:38.035832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.949 [2024-11-18 12:53:38.035945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.949 [2024-11-18 12:53:38.035978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.949 [2024-11-18 12:53:38.035978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.949 [2024-11-18 12:53:38.172359] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.949 [2024-11-18 12:53:38.185732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:40.949 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -ah 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 -ah 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 -ah 00:10:40.950 
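The `rpc_cmd` calls above drive the referral test: create the TCP transport, add a discovery listener on 10.0.0.2:8009, then register three referrals. A dry-run sketch of that RPC sequence follows (echoed, not executed; the assumption — not shown in this log — is that `rpc_cmd` wraps SPDK's `scripts/rpc.py` against the default `/var/tmp/spdk.sock`):

```shell
#!/usr/bin/env bash
# Dry-run of the referral setup RPCs seen in referrals.sh@40-46.
# rpc() only echoes; the real helper invokes scripts/rpc.py (assumed).
rpc() { echo "rpc.py $*"; }

setup_referrals() {
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    # -ah: allow any host to use the referral
    rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430 -ah
  done
  rpc nvmf_discovery_get_referrals   # the log then checks: jq length == 3
}
setup_referrals
```

The test then removes each referral the same way (`nvmf_discovery_remove_referral`) and verifies the count drops back to 0.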
12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
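The `get_referral_ips` checks above compare two views of the same state: referral addresses reported over RPC and addresses returned by `nvme discover ... -o json` (filtered with jq on `.records[].traddr`, then sorted). The sketch below mirrors that sort-and-compare logic with a stubbed discovery source, since the real commands need a live target; the stub addresses are just the ones seen in this log:

```shell
#!/usr/bin/env bash
# Sketch of the get_referral_ips comparison: gather referral traddrs,
# sort them, and match against the expected set. discover_stub stands in
# for `nvme discover -o json | jq ...` (illustrative only).
discover_stub() { printf '%s\n' 127.0.0.4 127.0.0.2 127.0.0.3; }

get_referral_ips() { discover_stub | sort | xargs; }

expected="127.0.0.2 127.0.0.3 127.0.0.4"
[[ "$(get_referral_ips)" == "$expected" ]] && echo "referrals match"
```

Sorting on both sides is what makes the `[[ a == b ]]` string comparison in the log order-independent.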
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:40.950 12:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:40.950 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery -ah 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 -ah 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # 
get_referral_ips rpc 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:41.210 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:10:41.470 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:41.470 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:41.470 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:41.470 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:41.470 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:41.470 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:41.470 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:41.470 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:41.470 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:41.470 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:41.470 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:41.470 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:41.470 12:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:41.738 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:41.739 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.739 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:41.739 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:41.739 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:41.739 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:41.739 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:41.739 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:41.739 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:41.739 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:42.000 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:42.000 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:42.000 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:42.000 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:42.000 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:42.000 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:42.000 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 
00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:42.260 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 
-- # '[' tcp == tcp ']' 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.520 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.520 rmmod nvme_tcp 00:10:42.520 rmmod nvme_fabrics 00:10:42.520 rmmod nvme_keyring 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2240965 ']' 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2240965 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 2240965 ']' 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 2240965 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2240965 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2240965' 00:10:42.813 killing 
process with pid 2240965 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 2240965 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 2240965 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.813 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:45.356 00:10:45.356 real 0m10.902s 00:10:45.356 user 0m12.232s 00:10:45.356 sys 0m5.320s 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.356 12:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:45.356 ************************************ 00:10:45.356 END TEST nvmf_referrals 00:10:45.356 ************************************ 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:45.356 ************************************ 00:10:45.356 START TEST nvmf_connect_disconnect 00:10:45.356 ************************************ 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:45.356 * Looking for test storage... 
00:10:45.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:45.356 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:45.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.357 --rc genhtml_branch_coverage=1 00:10:45.357 --rc genhtml_function_coverage=1 00:10:45.357 --rc genhtml_legend=1 00:10:45.357 --rc geninfo_all_blocks=1 00:10:45.357 --rc geninfo_unexecuted_blocks=1 00:10:45.357 00:10:45.357 ' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:45.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.357 --rc genhtml_branch_coverage=1 00:10:45.357 --rc genhtml_function_coverage=1 00:10:45.357 --rc genhtml_legend=1 00:10:45.357 --rc geninfo_all_blocks=1 00:10:45.357 --rc geninfo_unexecuted_blocks=1 00:10:45.357 00:10:45.357 ' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:45.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.357 --rc genhtml_branch_coverage=1 00:10:45.357 --rc genhtml_function_coverage=1 00:10:45.357 --rc genhtml_legend=1 00:10:45.357 --rc geninfo_all_blocks=1 00:10:45.357 --rc geninfo_unexecuted_blocks=1 00:10:45.357 00:10:45.357 ' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:45.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.357 --rc genhtml_branch_coverage=1 00:10:45.357 --rc genhtml_function_coverage=1 00:10:45.357 --rc genhtml_legend=1 00:10:45.357 --rc geninfo_all_blocks=1 00:10:45.357 --rc geninfo_unexecuted_blocks=1 00:10:45.357 00:10:45.357 ' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.357 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:45.358 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.934 12:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.934 12:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:51.934 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:51.934 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.934 12:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:51.934 Found net devices under 0000:86:00.0: cvl_0_0 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.934 12:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:51.934 Found net devices under 0000:86:00.1: cvl_0_1 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.934 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.935 12:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:10:51.935 00:10:51.935 --- 10.0.0.2 ping statistics --- 00:10:51.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.935 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:10:51.935 00:10:51.935 --- 10.0.0.1 ping statistics --- 00:10:51.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.935 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2245040 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2245040 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 2245040 ']' 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:51.935 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.935 [2024-11-18 12:53:48.859372] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:10:51.935 [2024-11-18 12:53:48.859419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.935 [2024-11-18 12:53:48.937907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.935 [2024-11-18 12:53:48.980727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:51.935 [2024-11-18 12:53:48.980765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.935 [2024-11-18 12:53:48.980772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.935 [2024-11-18 12:53:48.980778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.935 [2024-11-18 12:53:48.980783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.935 [2024-11-18 12:53:48.982338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.935 [2024-11-18 12:53:48.982453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.935 [2024-11-18 12:53:48.982487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.935 [2024-11-18 12:53:48.982488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:51.935 12:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.935 [2024-11-18 12:53:49.124093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.935 12:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:51.935 [2024-11-18 12:53:49.191803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:51.935 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:55.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:08.357 12:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.357 rmmod nvme_tcp 00:11:08.357 rmmod nvme_fabrics 00:11:08.357 rmmod nvme_keyring 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2245040 ']' 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2245040 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2245040 ']' 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 2245040 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2245040 
00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2245040' 00:11:08.357 killing process with pid 2245040 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 2245040 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 2245040 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:08.357 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:08.358 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.358 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.358 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.358 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.358 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.358 12:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.358 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.266 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.266 00:11:10.266 real 0m25.353s 00:11:10.266 user 1m8.781s 00:11:10.266 sys 0m5.878s 00:11:10.266 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.266 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:10.266 ************************************ 00:11:10.266 END TEST nvmf_connect_disconnect 00:11:10.266 ************************************ 00:11:10.527 12:54:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:10.527 12:54:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:10.527 12:54:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.527 12:54:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.527 ************************************ 00:11:10.527 START TEST nvmf_multitarget 00:11:10.527 ************************************ 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:10.527 * Looking for test storage... 
00:11:10.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:10.527 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.527 --rc genhtml_branch_coverage=1 00:11:10.527 --rc genhtml_function_coverage=1 00:11:10.527 --rc genhtml_legend=1 00:11:10.527 --rc geninfo_all_blocks=1 00:11:10.527 --rc geninfo_unexecuted_blocks=1 00:11:10.527 00:11:10.527 ' 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:10.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.527 --rc genhtml_branch_coverage=1 00:11:10.527 --rc genhtml_function_coverage=1 00:11:10.527 --rc genhtml_legend=1 00:11:10.527 --rc geninfo_all_blocks=1 00:11:10.527 --rc geninfo_unexecuted_blocks=1 00:11:10.527 00:11:10.527 ' 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:10.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.527 --rc genhtml_branch_coverage=1 00:11:10.527 --rc genhtml_function_coverage=1 00:11:10.527 --rc genhtml_legend=1 00:11:10.527 --rc geninfo_all_blocks=1 00:11:10.527 --rc geninfo_unexecuted_blocks=1 00:11:10.527 00:11:10.527 ' 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:10.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.527 --rc genhtml_branch_coverage=1 00:11:10.527 --rc genhtml_function_coverage=1 00:11:10.527 --rc genhtml_legend=1 00:11:10.527 --rc geninfo_all_blocks=1 00:11:10.527 --rc geninfo_unexecuted_blocks=1 00:11:10.527 00:11:10.527 ' 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.527 12:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.527 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:10.788 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.789 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:10.789 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:10.789 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:10.789 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.789 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.789 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.789 12:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:10.789 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:10.789 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:10.789 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:17.371 12:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.371 12:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:17.371 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:17.371 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.371 12:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:17.371 Found net devices under 0000:86:00.0: cvl_0_0 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.371 
12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:17.371 Found net devices under 0000:86:00.1: cvl_0_1 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.371 12:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.371 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.371 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.371 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.371 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.371 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.371 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.371 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.371 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.371 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:11:17.371 00:11:17.371 --- 10.0.0.2 ping statistics --- 00:11:17.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.371 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:11:17.371 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:17.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:11:17.371 00:11:17.372 --- 10.0.0.1 ping statistics --- 00:11:17.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.372 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2251851 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2251851 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 2251851 ']' 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:17.372 [2024-11-18 12:54:14.297551] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:11:17.372 [2024-11-18 12:54:14.297593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.372 [2024-11-18 12:54:14.377097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.372 [2024-11-18 12:54:14.418639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.372 [2024-11-18 12:54:14.418676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:17.372 [2024-11-18 12:54:14.418683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.372 [2024-11-18 12:54:14.418689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.372 [2024-11-18 12:54:14.418694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.372 [2024-11-18 12:54:14.420251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.372 [2024-11-18 12:54:14.420389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.372 [2024-11-18 12:54:14.420447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.372 [2024-11-18 12:54:14.420447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:17.372 12:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:17.372 "nvmf_tgt_1" 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:17.372 "nvmf_tgt_2" 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:17.372 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:17.636 true 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:17.636 true 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.636 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.636 rmmod nvme_tcp 00:11:17.895 rmmod nvme_fabrics 00:11:17.895 rmmod nvme_keyring 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2251851 ']' 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2251851 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 2251851 ']' 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 2251851 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2251851 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2251851' 00:11:17.895 killing process with pid 2251851 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 2251851 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 2251851 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.895 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:18.155 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.155 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.155 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.155 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.155 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.063 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:20.063 00:11:20.063 real 0m9.631s 00:11:20.063 user 0m7.162s 00:11:20.063 sys 0m4.952s 00:11:20.063 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.063 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.063 ************************************ 00:11:20.063 END TEST nvmf_multitarget 00:11:20.063 ************************************ 00:11:20.063 12:54:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:20.063 12:54:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:20.063 12:54:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.063 12:54:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.063 ************************************ 00:11:20.063 START TEST nvmf_rpc 00:11:20.063 ************************************ 00:11:20.063 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:20.323 * Looking for test storage... 
00:11:20.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.323 12:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:20.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.323 --rc genhtml_branch_coverage=1 00:11:20.323 --rc genhtml_function_coverage=1 00:11:20.323 --rc genhtml_legend=1 00:11:20.323 --rc geninfo_all_blocks=1 00:11:20.323 --rc geninfo_unexecuted_blocks=1 
00:11:20.323 00:11:20.323 ' 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:20.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.323 --rc genhtml_branch_coverage=1 00:11:20.323 --rc genhtml_function_coverage=1 00:11:20.323 --rc genhtml_legend=1 00:11:20.323 --rc geninfo_all_blocks=1 00:11:20.323 --rc geninfo_unexecuted_blocks=1 00:11:20.323 00:11:20.323 ' 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:20.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.323 --rc genhtml_branch_coverage=1 00:11:20.323 --rc genhtml_function_coverage=1 00:11:20.323 --rc genhtml_legend=1 00:11:20.323 --rc geninfo_all_blocks=1 00:11:20.323 --rc geninfo_unexecuted_blocks=1 00:11:20.323 00:11:20.323 ' 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:20.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.323 --rc genhtml_branch_coverage=1 00:11:20.323 --rc genhtml_function_coverage=1 00:11:20.323 --rc genhtml_legend=1 00:11:20.323 --rc geninfo_all_blocks=1 00:11:20.323 --rc geninfo_unexecuted_blocks=1 00:11:20.323 00:11:20.323 ' 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.323 12:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.323 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.324 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.324 12:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.899 
12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:26.899 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:26.899 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:26.899 Found net devices under 0000:86:00.0: cvl_0_0 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.899 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:26.900 Found net devices under 0000:86:00.1: cvl_0_1 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.900 12:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.900 
12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:11:26.900 00:11:26.900 --- 10.0.0.2 ping statistics --- 00:11:26.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.900 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:11:26.900 00:11:26.900 --- 10.0.0.1 ping statistics --- 00:11:26.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.900 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2255520 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.900 
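The namespace plumbing the trace performs above (`nvmf_tcp_init` in `nvmf/common.sh`) can be summarized as the following outline. The interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.0/24` addresses are taken from this log; the commands require root and the same physical NICs, so this is a non-executed sketch of the setup, not a drop-in script:

```shell
# Sketch of the TCP test-network setup traced above (requires root; names from this log).
ip netns add cvl_0_0_ns_spdk                  # private namespace for the SPDK target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the helper tags the rule with an SPDK_NVMF comment
# (full comment text abbreviated here) so teardown can find and delete it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'
ping -c 1 10.0.0.2                            # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The target process is then started inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`, as the next trace records show), which keeps the test traffic isolated from the build host's networking.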
12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2255520 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 2255520 ']' 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:26.900 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.900 [2024-11-18 12:54:23.967868] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:11:26.900 [2024-11-18 12:54:23.967921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.900 [2024-11-18 12:54:24.047865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.900 [2024-11-18 12:54:24.090600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.900 [2024-11-18 12:54:24.090637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.900 [2024-11-18 12:54:24.090644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.900 [2024-11-18 12:54:24.090650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:26.900 [2024-11-18 12:54:24.090655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.900 [2024-11-18 12:54:24.092230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.900 [2024-11-18 12:54:24.092337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.900 [2024-11-18 12:54:24.092436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.900 [2024-11-18 12:54:24.092437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:26.900 "tick_rate": 2300000000, 00:11:26.900 "poll_groups": [ 00:11:26.900 { 00:11:26.900 "name": "nvmf_tgt_poll_group_000", 00:11:26.900 "admin_qpairs": 0, 00:11:26.900 "io_qpairs": 0, 00:11:26.900 
"current_admin_qpairs": 0, 00:11:26.900 "current_io_qpairs": 0, 00:11:26.900 "pending_bdev_io": 0, 00:11:26.900 "completed_nvme_io": 0, 00:11:26.900 "transports": [] 00:11:26.900 }, 00:11:26.900 { 00:11:26.900 "name": "nvmf_tgt_poll_group_001", 00:11:26.900 "admin_qpairs": 0, 00:11:26.900 "io_qpairs": 0, 00:11:26.900 "current_admin_qpairs": 0, 00:11:26.900 "current_io_qpairs": 0, 00:11:26.900 "pending_bdev_io": 0, 00:11:26.900 "completed_nvme_io": 0, 00:11:26.900 "transports": [] 00:11:26.900 }, 00:11:26.900 { 00:11:26.900 "name": "nvmf_tgt_poll_group_002", 00:11:26.900 "admin_qpairs": 0, 00:11:26.900 "io_qpairs": 0, 00:11:26.900 "current_admin_qpairs": 0, 00:11:26.900 "current_io_qpairs": 0, 00:11:26.900 "pending_bdev_io": 0, 00:11:26.900 "completed_nvme_io": 0, 00:11:26.900 "transports": [] 00:11:26.900 }, 00:11:26.900 { 00:11:26.900 "name": "nvmf_tgt_poll_group_003", 00:11:26.900 "admin_qpairs": 0, 00:11:26.900 "io_qpairs": 0, 00:11:26.900 "current_admin_qpairs": 0, 00:11:26.900 "current_io_qpairs": 0, 00:11:26.900 "pending_bdev_io": 0, 00:11:26.900 "completed_nvme_io": 0, 00:11:26.900 "transports": [] 00:11:26.900 } 00:11:26.900 ] 00:11:26.900 }' 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:26.900 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.901 [2024-11-18 12:54:24.346545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:26.901 "tick_rate": 2300000000, 00:11:26.901 "poll_groups": [ 00:11:26.901 { 00:11:26.901 "name": "nvmf_tgt_poll_group_000", 00:11:26.901 "admin_qpairs": 0, 00:11:26.901 "io_qpairs": 0, 00:11:26.901 "current_admin_qpairs": 0, 00:11:26.901 "current_io_qpairs": 0, 00:11:26.901 "pending_bdev_io": 0, 00:11:26.901 "completed_nvme_io": 0, 00:11:26.901 "transports": [ 00:11:26.901 { 00:11:26.901 "trtype": "TCP" 00:11:26.901 } 00:11:26.901 ] 00:11:26.901 }, 00:11:26.901 { 00:11:26.901 "name": "nvmf_tgt_poll_group_001", 00:11:26.901 "admin_qpairs": 0, 00:11:26.901 "io_qpairs": 0, 00:11:26.901 "current_admin_qpairs": 0, 00:11:26.901 "current_io_qpairs": 0, 00:11:26.901 "pending_bdev_io": 0, 00:11:26.901 "completed_nvme_io": 0, 00:11:26.901 "transports": [ 00:11:26.901 { 00:11:26.901 "trtype": "TCP" 00:11:26.901 } 00:11:26.901 ] 00:11:26.901 }, 00:11:26.901 { 00:11:26.901 "name": "nvmf_tgt_poll_group_002", 00:11:26.901 "admin_qpairs": 0, 00:11:26.901 "io_qpairs": 0, 00:11:26.901 
"current_admin_qpairs": 0, 00:11:26.901 "current_io_qpairs": 0, 00:11:26.901 "pending_bdev_io": 0, 00:11:26.901 "completed_nvme_io": 0, 00:11:26.901 "transports": [ 00:11:26.901 { 00:11:26.901 "trtype": "TCP" 00:11:26.901 } 00:11:26.901 ] 00:11:26.901 }, 00:11:26.901 { 00:11:26.901 "name": "nvmf_tgt_poll_group_003", 00:11:26.901 "admin_qpairs": 0, 00:11:26.901 "io_qpairs": 0, 00:11:26.901 "current_admin_qpairs": 0, 00:11:26.901 "current_io_qpairs": 0, 00:11:26.901 "pending_bdev_io": 0, 00:11:26.901 "completed_nvme_io": 0, 00:11:26.901 "transports": [ 00:11:26.901 { 00:11:26.901 "trtype": "TCP" 00:11:26.901 } 00:11:26.901 ] 00:11:26.901 } 00:11:26.901 ] 00:11:26.901 }' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.901 Malloc1 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.901 [2024-11-18 12:54:24.527840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.901 
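The `NOT`/`valid_exec_arg` records above show how the harness decides whether the `nvme` argument is runnable before negating its exit status: it classifies the name with bash's `type -t`, then resolves external binaries to a path with `type -P`. A minimal sketch of that check, using `sh` as a stand-in command name:

```shell
# Minimal sketch of the executable-resolution check used by valid_exec_arg above:
# classify the name, then resolve external binaries to an on-disk path.
arg=sh                                 # stand-in command name for illustration
case "$(type -t "$arg")" in
    file) path=$(type -P "$arg") ;;    # external binary: resolve via PATH
    alias | function | builtin) path=$arg ;;
    *) echo "not executable: $arg" >&2; exit 1 ;;
esac
test -x "$path" && echo "resolved: $path"
```

In the log, `type -t nvme` reports `file`, so the helper resolves it to `/usr/sbin/nvme`, confirms it is executable, and only then runs the connect attempt it expects to fail.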
12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:26.901 [2024-11-18 12:54:24.556420] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:26.901 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:26.901 could not add new controller: failed to write to nvme-fabrics device 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.901 12:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.901 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.283 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.283 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:28.283 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.283 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:28.283 12:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.193 12:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.193 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:30.194 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.194 [2024-11-18 12:54:27.879534] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:30.454 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:30.454 could not add new controller: failed to write to nvme-fabrics device 00:11:30.454 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:30.454 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:30.454 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:30.454 12:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:30.454 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:30.454 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.454 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.454 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.454 12:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.393 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.652 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:31.652 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.652 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:31.652 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( 
nvme_devices == nvme_device_counter )) 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:33.557 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.817 [2024-11-18 12:54:31.297515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.817 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.196 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.196 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:35.196 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.196 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:35.197 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 12:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 [2024-11-18 12:54:34.639145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.104 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.482 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:38.483 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:38.483 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.483 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:38.483 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 [2024-11-18 12:54:38.000048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 12:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.776 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.776 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:41.776 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:41.776 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:41.776 12:54:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:43.681 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:43.681 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:43.681 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.681 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:43.681 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.681 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:43.681 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1233 -- # return 0 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.682 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.941 [2024-11-18 12:54:41.388046] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.941 12:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.320 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.320 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:45.320 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.320 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:45.320 12:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
sleep 2 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.229 [2024-11-18 12:54:44.764269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.229 12:54:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.229 12:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.611 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.611 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:48.611 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.611 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:48.611 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l 
-o NAME,SERIAL 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:50.521 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.521 [2024-11-18 12:54:48.046321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.521 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 [2024-11-18 12:54:48.094381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.522 
12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 [2024-11-18 12:54:48.142515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.522 
12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 [2024-11-18 12:54:48.190691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.522 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.783 [2024-11-18 
12:54:48.238857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.783 
12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:50.783 "tick_rate": 2300000000, 00:11:50.783 "poll_groups": [ 00:11:50.783 { 00:11:50.783 "name": "nvmf_tgt_poll_group_000", 00:11:50.783 "admin_qpairs": 2, 00:11:50.783 "io_qpairs": 168, 00:11:50.783 "current_admin_qpairs": 0, 00:11:50.783 "current_io_qpairs": 0, 00:11:50.783 "pending_bdev_io": 0, 00:11:50.783 "completed_nvme_io": 217, 00:11:50.783 "transports": [ 00:11:50.783 { 00:11:50.783 "trtype": "TCP" 00:11:50.783 } 00:11:50.783 ] 00:11:50.783 }, 00:11:50.783 { 00:11:50.783 "name": "nvmf_tgt_poll_group_001", 00:11:50.783 "admin_qpairs": 2, 00:11:50.783 "io_qpairs": 168, 00:11:50.783 "current_admin_qpairs": 0, 00:11:50.783 "current_io_qpairs": 0, 00:11:50.783 "pending_bdev_io": 0, 00:11:50.783 "completed_nvme_io": 267, 00:11:50.783 "transports": [ 00:11:50.783 { 00:11:50.783 "trtype": "TCP" 00:11:50.783 } 00:11:50.783 ] 00:11:50.783 }, 00:11:50.783 { 00:11:50.783 "name": "nvmf_tgt_poll_group_002", 00:11:50.783 "admin_qpairs": 1, 00:11:50.783 "io_qpairs": 168, 00:11:50.783 "current_admin_qpairs": 0, 00:11:50.783 "current_io_qpairs": 0, 00:11:50.783 "pending_bdev_io": 0, 00:11:50.783 "completed_nvme_io": 307, 00:11:50.783 "transports": [ 00:11:50.783 { 00:11:50.783 "trtype": "TCP" 00:11:50.783 } 00:11:50.783 ] 00:11:50.783 }, 00:11:50.783 { 00:11:50.783 "name": "nvmf_tgt_poll_group_003", 00:11:50.783 "admin_qpairs": 2, 00:11:50.783 "io_qpairs": 168, 
00:11:50.783 "current_admin_qpairs": 0, 00:11:50.783 "current_io_qpairs": 0, 00:11:50.783 "pending_bdev_io": 0, 00:11:50.783 "completed_nvme_io": 231, 00:11:50.783 "transports": [ 00:11:50.783 { 00:11:50.783 "trtype": "TCP" 00:11:50.783 } 00:11:50.783 ] 00:11:50.783 } 00:11:50.783 ] 00:11:50.783 }' 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:50.783 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.784 rmmod nvme_tcp 00:11:50.784 rmmod nvme_fabrics 00:11:50.784 rmmod nvme_keyring 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2255520 ']' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2255520 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 2255520 ']' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 2255520 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:50.784 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2255520 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2255520' 00:11:51.044 killing process with pid 2255520 00:11:51.044 12:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 2255520 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 2255520 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.044 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:53.586 00:11:53.586 real 0m33.025s 00:11:53.586 user 1m39.655s 00:11:53.586 sys 0m6.563s 00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.586 ************************************ 00:11:53.586 END TEST 
nvmf_rpc
00:11:53.586 ************************************
00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:53.586 ************************************
00:11:53.586 START TEST nvmf_invalid
00:11:53.586 ************************************
00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:11:53.586 * Looking for test storage...
00:11:53.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version
00:11:53.586 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-:
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-:
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<'
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2
00:11:53.586 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:11:53.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:53.587 --rc genhtml_branch_coverage=1
00:11:53.587 --rc genhtml_function_coverage=1
00:11:53.587 --rc genhtml_legend=1
00:11:53.587 --rc geninfo_all_blocks=1
00:11:53.587 --rc geninfo_unexecuted_blocks=1
00:11:53.587
00:11:53.587 '
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:11:53.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:53.587 --rc genhtml_branch_coverage=1
00:11:53.587 --rc genhtml_function_coverage=1
00:11:53.587 --rc genhtml_legend=1
00:11:53.587 --rc geninfo_all_blocks=1
00:11:53.587 --rc geninfo_unexecuted_blocks=1
00:11:53.587
00:11:53.587 '
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:11:53.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:53.587 --rc genhtml_branch_coverage=1
00:11:53.587 --rc genhtml_function_coverage=1
00:11:53.587 --rc genhtml_legend=1
00:11:53.587 --rc geninfo_all_blocks=1
00:11:53.587 --rc geninfo_unexecuted_blocks=1
00:11:53.587
00:11:53.587 '
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:11:53.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:53.587 --rc genhtml_branch_coverage=1
00:11:53.587 --rc genhtml_function_coverage=1
00:11:53.587 --rc genhtml_legend=1
00:11:53.587 --rc geninfo_all_blocks=1
00:11:53.587 --rc geninfo_unexecuted_blocks=1
00:11:53.587
00:11:53.587 '
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:53.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable
00:11:53.587 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=()
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=()
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=()
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=()
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=()
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:12:00.168 Found 0000:86:00.0 (0x8086 - 0x159b)
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:12:00.168 Found 0000:86:00.1 (0x8086 - 0x159b)
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:00.168 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:12:00.169 Found net devices under 0000:86:00.0: cvl_0_0
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:12:00.169 Found net devices under 0000:86:00.1: cvl_0_1
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:00.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:00.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms
00:12:00.169
00:12:00.169 --- 10.0.0.2 ping statistics ---
00:12:00.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:00.169 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:00.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:00.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms
00:12:00.169
00:12:00.169 --- 10.0.0.1 ping statistics ---
00:12:00.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:00.169 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:00.169 12:54:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2263340
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2263340
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 2263340 ']'
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:00.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:00.169 [2024-11-18 12:54:57.085553] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:12:00.169 [2024-11-18 12:54:57.085603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:00.169 [2024-11-18 12:54:57.167320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:00.169 [2024-11-18 12:54:57.210119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:00.169 [2024-11-18 12:54:57.210157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:00.169 [2024-11-18 12:54:57.210164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:00.169 [2024-11-18 12:54:57.210170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:00.169 [2024-11-18 12:54:57.210175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:00.169 [2024-11-18 12:54:57.211780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:00.169 [2024-11-18 12:54:57.211888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:00.169 [2024-11-18 12:54:57.211998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:00.169 [2024-11-18 12:54:57.211999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11438
[2024-11-18 12:54:57.518087] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:12:00.169 {
00:12:00.169 "nqn": "nqn.2016-06.io.spdk:cnode11438",
00:12:00.169 "tgt_name": "foobar",
00:12:00.169 "method": "nvmf_create_subsystem",
00:12:00.169 "req_id": 1
00:12:00.169 }
00:12:00.169 Got JSON-RPC error response
00:12:00.169 response:
00:12:00.169 {
00:12:00.169 "code": -32603,
00:12:00.169 "message": "Unable to find target foobar"
00:12:00.169 }'
00:12:00.169 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:12:00.170 {
00:12:00.170 "nqn": "nqn.2016-06.io.spdk:cnode11438",
00:12:00.170 "tgt_name": "foobar",
00:12:00.170 "method": "nvmf_create_subsystem",
00:12:00.170 "req_id": 1
00:12:00.170 }
00:12:00.170 Got JSON-RPC error response
00:12:00.170 response:
00:12:00.170 {
00:12:00.170 "code": -32603,
00:12:00.170 "message": "Unable to find target foobar"
00:12:00.170 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:12:00.170 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:12:00.170 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17212
[2024-11-18 12:54:57.726824] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17212: invalid serial number 'SPDKISFASTANDAWESOME'
00:12:00.170 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:12:00.170 {
00:12:00.170 "nqn": "nqn.2016-06.io.spdk:cnode17212",
00:12:00.170 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:00.170 "method": "nvmf_create_subsystem",
00:12:00.170 "req_id": 1
00:12:00.170 }
00:12:00.170 Got JSON-RPC error response
00:12:00.170 response:
00:12:00.170 {
00:12:00.170 "code": -32602,
00:12:00.170 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:00.170 }'
00:12:00.170 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:12:00.170 {
00:12:00.170 "nqn": "nqn.2016-06.io.spdk:cnode17212",
00:12:00.170 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:00.170 "method": "nvmf_create_subsystem",
00:12:00.170 "req_id": 1
00:12:00.170 }
00:12:00.170 Got JSON-RPC error response
00:12:00.170 response:
00:12:00.170 {
00:12:00.170 "code": -32602,
00:12:00.170 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:00.170 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:00.170 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:12:00.170 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32533
00:12:00.430 [2024-11-18 12:54:57.939509] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32533: invalid model number 'SPDK_Controller'
00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:12:00.430 {
00:12:00.430 "nqn": "nqn.2016-06.io.spdk:cnode32533",
00:12:00.430 "model_number": "SPDK_Controller\u001f",
00:12:00.430 "method": "nvmf_create_subsystem",
00:12:00.430 "req_id": 1
00:12:00.430 }
00:12:00.430 Got JSON-RPC error response
00:12:00.430 response:
00:12:00.430 {
00:12:00.430 "code": -32602,
00:12:00.430 "message": "Invalid MN SPDK_Controller\u001f"
00:12:00.430 }'
00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:12:00.430 {
00:12:00.430 "nqn": "nqn.2016-06.io.spdk:cnode32533",
00:12:00.430 "model_number": "SPDK_Controller\u001f",
00:12:00.430 "method": "nvmf_create_subsystem",
00:12:00.430 "req_id": 1
00:12:00.430 }
00:12:00.430 Got JSON-RPC error response
00:12:00.430 response:
00:12:00.430 {
00:12:00.430 "code": -32602,
00:12:00.430 "message": "Invalid MN SPDK_Controller\u001f"
00:12:00.430 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local
length=21 ll 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.430 12:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:00.430 12:54:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:00.430 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:00.430 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:00.431 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:00.431 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.431 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'OqI'\''jJhro e Za.A;]1qF' 00:12:00.431 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'OqI'\''jJhro e Za.A;]1qF' nqn.2016-06.io.spdk:cnode8351 00:12:00.691 [2024-11-18 12:54:58.292733] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8351: invalid serial number 'OqI'jJhro e Za.A;]1qF' 00:12:00.691 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:00.691 { 00:12:00.691 "nqn": "nqn.2016-06.io.spdk:cnode8351", 00:12:00.691 "serial_number": "OqI'\''jJhro e Za.A;]1qF", 00:12:00.691 "method": "nvmf_create_subsystem", 00:12:00.691 "req_id": 1 00:12:00.691 } 00:12:00.691 Got JSON-RPC error response 00:12:00.691 response: 00:12:00.691 { 00:12:00.691 "code": -32602, 00:12:00.691 "message": "Invalid SN OqI'\''jJhro e Za.A;]1qF" 00:12:00.691 }' 00:12:00.691 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:00.691 { 00:12:00.691 "nqn": "nqn.2016-06.io.spdk:cnode8351", 00:12:00.691 "serial_number": "OqI'jJhro e Za.A;]1qF", 00:12:00.691 "method": "nvmf_create_subsystem", 00:12:00.691 "req_id": 1 00:12:00.691 } 00:12:00.691 Got JSON-RPC error response 00:12:00.691 response: 00:12:00.691 { 00:12:00.691 "code": -32602, 00:12:00.691 "message": "Invalid SN OqI'jJhro e Za.A;]1qF" 00:12:00.691 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:00.691 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:00.691 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:00.691 
12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:00.691 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:00.691 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.692 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:00.692 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.692 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:00.953 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:00.953 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:00.953 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:00.953 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:00.954 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:00.954 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]] 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '|HQjBLk=n!2KYK|voH~1iE9Z'\''qO)H%T)*tMp#{cZG' 00:12:00.954 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '|HQjBLk=n!2KYK|voH~1iE9Z'\''qO)H%T)*tMp#{cZG' nqn.2016-06.io.spdk:cnode20197 00:12:01.214 [2024-11-18 12:54:58.766277] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20197: invalid model number '|HQjBLk=n!2KYK|voH~1iE9Z'qO)H%T)*tMp#{cZG' 00:12:01.214 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:01.214 { 00:12:01.214 "nqn": "nqn.2016-06.io.spdk:cnode20197", 00:12:01.214 "model_number": "|HQjBLk=n!2KYK|voH~1iE9Z'\''qO)H%T)*tMp#{cZG", 00:12:01.214 "method": "nvmf_create_subsystem", 00:12:01.214 "req_id": 1 00:12:01.214 } 00:12:01.214 Got JSON-RPC error response 00:12:01.214 response: 00:12:01.214 { 00:12:01.214 "code": -32602, 00:12:01.214 "message": "Invalid MN |HQjBLk=n!2KYK|voH~1iE9Z'\''qO)H%T)*tMp#{cZG" 00:12:01.214 }' 00:12:01.214 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:01.214 { 00:12:01.214 "nqn": 
"nqn.2016-06.io.spdk:cnode20197", 00:12:01.214 "model_number": "|HQjBLk=n!2KYK|voH~1iE9Z'qO)H%T)*tMp#{cZG", 00:12:01.214 "method": "nvmf_create_subsystem", 00:12:01.214 "req_id": 1 00:12:01.214 } 00:12:01.214 Got JSON-RPC error response 00:12:01.214 response: 00:12:01.214 { 00:12:01.214 "code": -32602, 00:12:01.214 "message": "Invalid MN |HQjBLk=n!2KYK|voH~1iE9Z'qO)H%T)*tMp#{cZG" 00:12:01.214 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:01.214 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:01.474 [2024-11-18 12:54:58.963007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.474 12:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:01.732 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:01.732 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:01.732 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:01.732 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:01.732 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:01.732 [2024-11-18 12:54:59.376361] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:01.732 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:01.732 { 00:12:01.732 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:01.732 "listen_address": { 00:12:01.732 "trtype": "tcp", 00:12:01.732 "traddr": "", 00:12:01.732 "trsvcid": "4421" 
00:12:01.732 }, 00:12:01.732 "method": "nvmf_subsystem_remove_listener", 00:12:01.732 "req_id": 1 00:12:01.732 } 00:12:01.732 Got JSON-RPC error response 00:12:01.732 response: 00:12:01.732 { 00:12:01.732 "code": -32602, 00:12:01.732 "message": "Invalid parameters" 00:12:01.732 }' 00:12:01.733 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:01.733 { 00:12:01.733 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:01.733 "listen_address": { 00:12:01.733 "trtype": "tcp", 00:12:01.733 "traddr": "", 00:12:01.733 "trsvcid": "4421" 00:12:01.733 }, 00:12:01.733 "method": "nvmf_subsystem_remove_listener", 00:12:01.733 "req_id": 1 00:12:01.733 } 00:12:01.733 Got JSON-RPC error response 00:12:01.733 response: 00:12:01.733 { 00:12:01.733 "code": -32602, 00:12:01.733 "message": "Invalid parameters" 00:12:01.733 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:01.733 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3273 -i 0 00:12:01.992 [2024-11-18 12:54:59.589023] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3273: invalid cntlid range [0-65519] 00:12:01.992 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:01.992 { 00:12:01.992 "nqn": "nqn.2016-06.io.spdk:cnode3273", 00:12:01.992 "min_cntlid": 0, 00:12:01.992 "method": "nvmf_create_subsystem", 00:12:01.992 "req_id": 1 00:12:01.992 } 00:12:01.992 Got JSON-RPC error response 00:12:01.992 response: 00:12:01.992 { 00:12:01.992 "code": -32602, 00:12:01.992 "message": "Invalid cntlid range [0-65519]" 00:12:01.992 }' 00:12:01.992 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:01.992 { 00:12:01.992 "nqn": "nqn.2016-06.io.spdk:cnode3273", 00:12:01.992 "min_cntlid": 0, 00:12:01.992 "method": 
"nvmf_create_subsystem", 00:12:01.992 "req_id": 1 00:12:01.992 } 00:12:01.992 Got JSON-RPC error response 00:12:01.992 response: 00:12:01.992 { 00:12:01.992 "code": -32602, 00:12:01.992 "message": "Invalid cntlid range [0-65519]" 00:12:01.992 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:01.992 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6777 -i 65520 00:12:02.252 [2024-11-18 12:54:59.801754] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6777: invalid cntlid range [65520-65519] 00:12:02.252 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:02.252 { 00:12:02.252 "nqn": "nqn.2016-06.io.spdk:cnode6777", 00:12:02.252 "min_cntlid": 65520, 00:12:02.252 "method": "nvmf_create_subsystem", 00:12:02.252 "req_id": 1 00:12:02.252 } 00:12:02.252 Got JSON-RPC error response 00:12:02.252 response: 00:12:02.252 { 00:12:02.252 "code": -32602, 00:12:02.252 "message": "Invalid cntlid range [65520-65519]" 00:12:02.252 }' 00:12:02.252 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:02.252 { 00:12:02.252 "nqn": "nqn.2016-06.io.spdk:cnode6777", 00:12:02.252 "min_cntlid": 65520, 00:12:02.252 "method": "nvmf_create_subsystem", 00:12:02.252 "req_id": 1 00:12:02.252 } 00:12:02.252 Got JSON-RPC error response 00:12:02.252 response: 00:12:02.252 { 00:12:02.252 "code": -32602, 00:12:02.252 "message": "Invalid cntlid range [65520-65519]" 00:12:02.252 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.252 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22117 -I 0 00:12:02.512 [2024-11-18 12:55:00.002441] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode22117: invalid cntlid range [1-0] 00:12:02.512 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:02.512 { 00:12:02.512 "nqn": "nqn.2016-06.io.spdk:cnode22117", 00:12:02.512 "max_cntlid": 0, 00:12:02.512 "method": "nvmf_create_subsystem", 00:12:02.512 "req_id": 1 00:12:02.512 } 00:12:02.512 Got JSON-RPC error response 00:12:02.512 response: 00:12:02.512 { 00:12:02.512 "code": -32602, 00:12:02.512 "message": "Invalid cntlid range [1-0]" 00:12:02.512 }' 00:12:02.512 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:02.512 { 00:12:02.512 "nqn": "nqn.2016-06.io.spdk:cnode22117", 00:12:02.512 "max_cntlid": 0, 00:12:02.512 "method": "nvmf_create_subsystem", 00:12:02.512 "req_id": 1 00:12:02.512 } 00:12:02.512 Got JSON-RPC error response 00:12:02.512 response: 00:12:02.512 { 00:12:02.512 "code": -32602, 00:12:02.512 "message": "Invalid cntlid range [1-0]" 00:12:02.512 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.512 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20510 -I 65520 00:12:02.512 [2024-11-18 12:55:00.207151] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20510: invalid cntlid range [1-65520] 00:12:02.772 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:02.772 { 00:12:02.772 "nqn": "nqn.2016-06.io.spdk:cnode20510", 00:12:02.772 "max_cntlid": 65520, 00:12:02.772 "method": "nvmf_create_subsystem", 00:12:02.772 "req_id": 1 00:12:02.772 } 00:12:02.772 Got JSON-RPC error response 00:12:02.772 response: 00:12:02.772 { 00:12:02.772 "code": -32602, 00:12:02.772 "message": "Invalid cntlid range [1-65520]" 00:12:02.772 }' 00:12:02.772 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:12:02.772 { 00:12:02.772 "nqn": "nqn.2016-06.io.spdk:cnode20510", 00:12:02.772 "max_cntlid": 65520, 00:12:02.772 "method": "nvmf_create_subsystem", 00:12:02.772 "req_id": 1 00:12:02.772 } 00:12:02.772 Got JSON-RPC error response 00:12:02.772 response: 00:12:02.772 { 00:12:02.772 "code": -32602, 00:12:02.772 "message": "Invalid cntlid range [1-65520]" 00:12:02.772 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.772 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17518 -i 6 -I 5 00:12:02.772 [2024-11-18 12:55:00.419924] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17518: invalid cntlid range [6-5] 00:12:02.772 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:02.772 { 00:12:02.772 "nqn": "nqn.2016-06.io.spdk:cnode17518", 00:12:02.772 "min_cntlid": 6, 00:12:02.772 "max_cntlid": 5, 00:12:02.772 "method": "nvmf_create_subsystem", 00:12:02.772 "req_id": 1 00:12:02.772 } 00:12:02.772 Got JSON-RPC error response 00:12:02.772 response: 00:12:02.772 { 00:12:02.772 "code": -32602, 00:12:02.772 "message": "Invalid cntlid range [6-5]" 00:12:02.772 }' 00:12:02.772 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:02.772 { 00:12:02.772 "nqn": "nqn.2016-06.io.spdk:cnode17518", 00:12:02.772 "min_cntlid": 6, 00:12:02.772 "max_cntlid": 5, 00:12:02.772 "method": "nvmf_create_subsystem", 00:12:02.772 "req_id": 1 00:12:02.772 } 00:12:02.772 Got JSON-RPC error response 00:12:02.772 response: 00:12:02.772 { 00:12:02.772 "code": -32602, 00:12:02.772 "message": "Invalid cntlid range [6-5]" 00:12:02.772 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.772 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:03.033 { 00:12:03.033 "name": "foobar", 00:12:03.033 "method": "nvmf_delete_target", 00:12:03.033 "req_id": 1 00:12:03.033 } 00:12:03.033 Got JSON-RPC error response 00:12:03.033 response: 00:12:03.033 { 00:12:03.033 "code": -32602, 00:12:03.033 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:03.033 }' 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:03.033 { 00:12:03.033 "name": "foobar", 00:12:03.033 "method": "nvmf_delete_target", 00:12:03.033 "req_id": 1 00:12:03.033 } 00:12:03.033 Got JSON-RPC error response 00:12:03.033 response: 00:12:03.033 { 00:12:03.033 "code": -32602, 00:12:03.033 "message": "The specified target doesn't exist, cannot delete it." 00:12:03.033 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.033 rmmod nvme_tcp 00:12:03.033 
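The character-by-character string construction traced above (target/invalid.sh @24-@25, repeated once per character) can be condensed into a minimal standalone sketch. This is a hedged reconstruction of the idiom visible in the trace, not the script itself; the four code points below are examples taken from the logged `printf %x` / `echo -e` pairs:

```shell
# Sketch of the invalid.sh character-building idiom seen in the trace:
# pick a decimal code point, render it as hex with printf, turn the hex
# escape into a character with echo -e, and append it to the string.
string=''
for code in 113 79 41 72; do         # q, O, ), H in the log above
  hex=$(printf '%x' "$code")         # decimal -> hex, e.g. 113 -> 71
  ch=$(echo -e "\x$hex")             # hex escape -> literal character
  string+=$ch
done
echo "$string"                       # -> qO)H
```

The script then feeds such strings to `rpc.py nvmf_create_subsystem` and glob-matches the JSON-RPC error text (e.g. `*Invalid MN*`), as the `[[ ... ]]` checks in the trace show.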
rmmod nvme_fabrics 00:12:03.033 rmmod nvme_keyring 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2263340 ']' 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2263340 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 2263340 ']' 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 2263340 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2263340 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2263340' 00:12:03.033 killing process with pid 2263340 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 2263340 00:12:03.033 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 2263340 00:12:03.292 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.292 12:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.292 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.293 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:03.293 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:03.293 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.293 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.293 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.293 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.293 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.293 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.293 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.204 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.464 00:12:05.464 real 0m12.068s 00:12:05.464 user 0m18.755s 00:12:05.464 sys 0m5.421s 00:12:05.464 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:05.464 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:05.464 ************************************ 00:12:05.464 END TEST nvmf_invalid 00:12:05.464 ************************************ 00:12:05.464 12:55:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:12:05.464 12:55:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:05.464 12:55:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:05.464 12:55:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.464 ************************************ 00:12:05.464 START TEST nvmf_connect_stress 00:12:05.464 ************************************ 00:12:05.464 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:05.464 * Looking for test storage... 00:12:05.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:05.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.464 --rc genhtml_branch_coverage=1 00:12:05.464 --rc genhtml_function_coverage=1 00:12:05.464 --rc genhtml_legend=1 00:12:05.464 --rc 
geninfo_all_blocks=1 00:12:05.464 --rc geninfo_unexecuted_blocks=1 00:12:05.464 00:12:05.464 ' 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:05.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.464 --rc genhtml_branch_coverage=1 00:12:05.464 --rc genhtml_function_coverage=1 00:12:05.464 --rc genhtml_legend=1 00:12:05.464 --rc geninfo_all_blocks=1 00:12:05.464 --rc geninfo_unexecuted_blocks=1 00:12:05.464 00:12:05.464 ' 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:05.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.464 --rc genhtml_branch_coverage=1 00:12:05.464 --rc genhtml_function_coverage=1 00:12:05.464 --rc genhtml_legend=1 00:12:05.464 --rc geninfo_all_blocks=1 00:12:05.464 --rc geninfo_unexecuted_blocks=1 00:12:05.464 00:12:05.464 ' 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:05.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.464 --rc genhtml_branch_coverage=1 00:12:05.464 --rc genhtml_function_coverage=1 00:12:05.464 --rc genhtml_legend=1 00:12:05.464 --rc geninfo_all_blocks=1 00:12:05.464 --rc geninfo_unexecuted_blocks=1 00:12:05.464 00:12:05.464 ' 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.464 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.724 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.724 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.724 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.724 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.724 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.725 
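The `cmp_versions` trace above (scripts/common.sh, invoked as `lt 1.15 2` for the lcov check) splits each dotted version on `.` into an array and compares component by component, left to right. A hedged standalone sketch of that logic, with `version_lt` as an assumed stand-in name for the script's `lt`/`cmp_versions` pair:

```shell
# Minimal sketch of dotted-version less-than, mirroring the component-wise
# comparison walked through in the trace (ver1/ver2 arrays, per-index check).
version_lt() {
  local IFS=.
  read -ra a <<< "$1"                 # e.g. 1.15 -> (1 15)
  read -ra b <<< "$2"                 # e.g. 2    -> (2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
    (( x < y )) && return 0           # first differing component decides
    (( x > y )) && return 1
  done
  return 1                            # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov is older than 2'
```

For `1.15` vs `2` the first components already differ (1 < 2), so the loop returns success on the first iteration, matching the `ver1[v] < ver2[v]` branch taken in the trace.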
12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.725 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.313 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.313 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.313 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.313 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.313 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.313 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.313 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.314 12:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:12.314 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:12.314 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.314 12:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:12.314 Found net devices under 0000:86:00.0: cvl_0_0 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:12.314 Found net devices under 0000:86:00.1: cvl_0_1 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.314 12:55:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:12:12.314 00:12:12.314 --- 10.0.0.2 ping statistics --- 00:12:12.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.314 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:12:12.314 00:12:12.314 --- 10.0.0.1 ping statistics --- 00:12:12.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.314 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.314 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2267511 00:12:12.315 12:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2267511 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 2267511 ']' 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.315 [2024-11-18 12:55:09.256515] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:12:12.315 [2024-11-18 12:55:09.256562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.315 [2024-11-18 12:55:09.336350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:12.315 [2024-11-18 12:55:09.378657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:12.315 [2024-11-18 12:55:09.378694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.315 [2024-11-18 12:55:09.378704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.315 [2024-11-18 12:55:09.378710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.315 [2024-11-18 12:55:09.378715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.315 [2024-11-18 12:55:09.380238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.315 [2024-11-18 12:55:09.380343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.315 [2024-11-18 12:55:09.380344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.315 [2024-11-18 12:55:09.517468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.315 [2024-11-18 12:55:09.537667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.315 NULL1 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2267644 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644 00:12:12.315 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.316 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.316 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:12.316 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.316 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:12.316 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:12.316 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.316 12:55:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:12.885 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.885 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:12.885 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:12.885 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.885 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:13.145 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.145 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:13.145 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:13.145 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.145 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:13.418 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.418 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:13.418 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:13.418 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.418 12:55:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:13.691 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.691 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:13.691 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:13.691 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.691 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:13.975 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.975 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:13.975 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:13.975 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.975 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:14.271 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.271 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:14.271 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:14.271 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.271 12:55:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:14.551 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.551 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:14.551 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:14.551 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.551 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:14.871 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.871 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:14.871 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:14.871 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.138 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:15.412 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.412 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:15.412 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:15.412 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.412 12:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:15.698 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.698 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:15.698 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:15.698 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.698 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:15.987 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.987 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:15.987 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:15.987 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.987 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:16.271 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.271 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:16.271 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:16.271 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.271 12:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:16.550 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.550 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:16.550 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:16.550 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.550 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:16.831 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.831 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:16.831 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:16.831 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.831 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:17.435 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.435 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:17.435 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:17.435 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.435 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:17.723 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.723 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:17.723 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:17.723 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.723 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:17.995 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.995 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:17.995 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:17.995 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.995 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:18.270 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.270 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:18.270 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:18.270 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.270 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:18.550 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.550 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:18.550 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:18.550 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.550 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:18.828 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.828 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:18.828 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:18.828 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.828 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:19.108 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.108 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:19.108 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:19.108 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.108 12:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:19.686 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.686 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:19.686 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:19.686 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.686 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:19.945 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.945 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:19.945 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:19.945 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.945 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:20.205 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.205 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:20.205 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:20.205 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.205 12:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:20.464 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.464 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:20.464 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:20.464 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.464 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:21.034 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.034 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:21.034 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:21.034 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.034 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:21.293 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.293 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:21.293 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:21.293 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.293 12:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:21.553 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.553 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:21.553 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:21.553 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.553 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:21.812 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.812 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:21.812 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:21.812 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.812 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:22.071 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.071 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:22.071 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:22.071 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.071 12:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:22.071 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2267644
00:12:22.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2267644) - No such process
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2267644
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:22.643 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:12:22.643 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2267511 ']'
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2267511
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 2267511 ']'
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 2267511
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2267511
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2267511'
killing process with pid 2267511
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 2267511
00:12:22.644 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 2267511
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:22.904 12:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:24.816 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:24.816
00:12:24.816 real 0m19.445s
00:12:24.816 user 0m40.594s
00:12:24.816 sys 0m8.565s
00:12:24.816 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:24.816 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:24.816 ************************************
00:12:24.816 END TEST nvmf_connect_stress
00:12:24.816 ************************************
00:12:24.816 12:55:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:24.816 12:55:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:12:24.816 12:55:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:24.816 12:55:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:24.816 ************************************
00:12:24.816 START TEST nvmf_fused_ordering
00:12:24.816 ************************************
00:12:24.816 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:25.077 * Looking for test storage...
00:12:25.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:12:25.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.077 --rc genhtml_branch_coverage=1
00:12:25.077 --rc genhtml_function_coverage=1
00:12:25.077 --rc genhtml_legend=1
00:12:25.077 --rc geninfo_all_blocks=1
00:12:25.077 --rc geninfo_unexecuted_blocks=1
00:12:25.077
00:12:25.077 '
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:12:25.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.077 --rc genhtml_branch_coverage=1
00:12:25.077 --rc genhtml_function_coverage=1
00:12:25.077 --rc genhtml_legend=1
00:12:25.077 --rc geninfo_all_blocks=1
00:12:25.077 --rc geninfo_unexecuted_blocks=1
00:12:25.077
00:12:25.077 '
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:12:25.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.077 --rc genhtml_branch_coverage=1
00:12:25.077 --rc genhtml_function_coverage=1
00:12:25.077 --rc genhtml_legend=1
00:12:25.077 --rc geninfo_all_blocks=1
00:12:25.077 --rc geninfo_unexecuted_blocks=1
00:12:25.077
00:12:25.077 '
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:12:25.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.077 --rc genhtml_branch_coverage=1
00:12:25.077 --rc genhtml_function_coverage=1
00:12:25.077 --rc genhtml_legend=1
00:12:25.077 --rc geninfo_all_blocks=1
00:12:25.077 --rc geninfo_unexecuted_blocks=1
00:12:25.077
00:12:25.077 '
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:25.077 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.078 12:55:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.661 12:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:31.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.661 12:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:31.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.661 12:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:31.661 Found net devices under 0000:86:00.0: cvl_0_0 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:31.661 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.661 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:31.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:12:31.662 00:12:31.662 --- 10.0.0.2 ping statistics --- 00:12:31.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.662 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:31.662 00:12:31.662 --- 10.0.0.1 ping statistics --- 00:12:31.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.662 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:31.662 12:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2272940 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2272940 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 2272940 ']' 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 [2024-11-18 12:55:28.776849] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:12:31.662 [2024-11-18 12:55:28.776901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.662 [2024-11-18 12:55:28.859172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.662 [2024-11-18 12:55:28.900821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.662 [2024-11-18 12:55:28.900859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.662 [2024-11-18 12:55:28.900866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.662 [2024-11-18 12:55:28.900872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.662 [2024-11-18 12:55:28.900877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:31.662 [2024-11-18 12:55:28.901429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.662 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 [2024-11-18 12:55:29.035543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 [2024-11-18 12:55:29.055718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 NULL1 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.662 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:31.662 [2024-11-18 12:55:29.109370] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:12:31.663 [2024-11-18 12:55:29.109400] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272962 ] 00:12:31.922 Attached to nqn.2016-06.io.spdk:cnode1 00:12:31.922 Namespace ID: 1 size: 1GB 00:12:31.922 fused_ordering(0) 00:12:31.922 fused_ordering(1) 00:12:31.922 fused_ordering(2) 00:12:31.922 fused_ordering(3) 00:12:31.922 fused_ordering(4) 00:12:31.922 fused_ordering(5) 00:12:31.922 fused_ordering(6) 00:12:31.922 fused_ordering(7) 00:12:31.922 fused_ordering(8) 00:12:31.922 fused_ordering(9) 00:12:31.922 fused_ordering(10) 00:12:31.922 fused_ordering(11) 00:12:31.922 fused_ordering(12) 00:12:31.922 fused_ordering(13) 00:12:31.922 fused_ordering(14) 00:12:31.922 fused_ordering(15) 00:12:31.922 fused_ordering(16) 00:12:31.922 fused_ordering(17) 00:12:31.922 fused_ordering(18) 00:12:31.922 fused_ordering(19) 00:12:31.922 fused_ordering(20) 00:12:31.922 fused_ordering(21) 00:12:31.922 fused_ordering(22) 00:12:31.922 fused_ordering(23) 00:12:31.922 fused_ordering(24) 00:12:31.922 fused_ordering(25) 00:12:31.922 fused_ordering(26) 00:12:31.922 fused_ordering(27) 00:12:31.922 
[fused_ordering(28) through fused_ordering(815) logged sequentially, 00:12:31.922 to 00:12:33.017; repetitive counter output collapsed]
[fused_ordering(816) through fused_ordering(820) logged sequentially, 00:12:33.017 to 00:12:33.588; repetitive counter output collapsed]
00:12:33.588 [2024-11-18 12:55:31.009320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12132f0 is same with the state(6) to be set
[fused_ordering(821) through fused_ordering(992) logged sequentially, 00:12:33.588 to 00:12:33.589; repetitive counter output collapsed; the ERROR above was emitted mid-stream between entries 820 and 821]
[fused_ordering(993) through fused_ordering(1023) logged sequentially at 00:12:33.589; repetitive counter output collapsed]
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:33.589 rmmod nvme_tcp
00:12:33.589 rmmod nvme_fabrics
00:12:33.589 rmmod nvme_keyring
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2272940 ']'
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2272940
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 2272940 ']'
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 2272940
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2272940
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2272940'
00:12:33.589 killing process with pid 2272940
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 2272940
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 2272940
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:33.589 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:33.850 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:12:33.850 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:12:33.850 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:33.850 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:12:33.850 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:33.850 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:33.850 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:33.850 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:33.850 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:35.760 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:35.760
00:12:35.760 real 0m10.859s
00:12:35.760 user 0m5.212s
00:12:35.760 sys 0m5.915s
00:12:35.760 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:35.760 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:35.760 ************************************
00:12:35.760 END TEST nvmf_fused_ordering
00:12:35.760 ************************************
00:12:35.760 12:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:12:35.760 12:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:12:35.760 12:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:35.760 12:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:35.760 ************************************
00:12:35.760 START TEST nvmf_ns_masking
00:12:35.760 ************************************
00:12:35.760 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:12:36.021 * Looking for test storage...
00:12:36.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:36.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.021 --rc genhtml_branch_coverage=1 00:12:36.021 --rc genhtml_function_coverage=1 00:12:36.021 --rc genhtml_legend=1 00:12:36.021 --rc geninfo_all_blocks=1 00:12:36.021 --rc 
geninfo_unexecuted_blocks=1 00:12:36.021 00:12:36.021 ' 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:36.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.021 --rc genhtml_branch_coverage=1 00:12:36.021 --rc genhtml_function_coverage=1 00:12:36.021 --rc genhtml_legend=1 00:12:36.021 --rc geninfo_all_blocks=1 00:12:36.021 --rc geninfo_unexecuted_blocks=1 00:12:36.021 00:12:36.021 ' 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:36.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.021 --rc genhtml_branch_coverage=1 00:12:36.021 --rc genhtml_function_coverage=1 00:12:36.021 --rc genhtml_legend=1 00:12:36.021 --rc geninfo_all_blocks=1 00:12:36.021 --rc geninfo_unexecuted_blocks=1 00:12:36.021 00:12:36.021 ' 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:36.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.021 --rc genhtml_branch_coverage=1 00:12:36.021 --rc genhtml_function_coverage=1 00:12:36.021 --rc genhtml_legend=1 00:12:36.021 --rc geninfo_all_blocks=1 00:12:36.021 --rc geninfo_unexecuted_blocks=1 00:12:36.021 00:12:36.021 ' 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.021 12:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.021 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.022 12:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8994261f-edd3-4c59-a63b-b20b0f8bc42c 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=57daacb5-88a9-459c-9eb9-ab20544769b8 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:36.022 12:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c46e7afc-5cc4-473d-81ca-d4f00d296b5d 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.022 12:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.601 12:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:42.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:42.601 12:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.601 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:42.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:42.602 Found net devices under 0000:86:00.0: cvl_0_0 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:42.602 Found net devices under 0000:86:00.1: 
cvl_0_1 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:42.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:12:42.602 00:12:42.602 --- 10.0.0.2 ping statistics --- 00:12:42.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.602 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:12:42.602 00:12:42.602 --- 10.0.0.1 ping statistics --- 00:12:42.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.602 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2276939 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2276939 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2276939 ']' 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.602 [2024-11-18 12:55:39.703668] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:12:42.602 [2024-11-18 12:55:39.703718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.602 [2024-11-18 12:55:39.782542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.602 [2024-11-18 12:55:39.825631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.602 [2024-11-18 12:55:39.825663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.602 [2024-11-18 12:55:39.825670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.602 [2024-11-18 12:55:39.825676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.602 [2024-11-18 12:55:39.825683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
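The `waitforlisten` step above blocks until the freshly launched `nvmf_tgt` process accepts connections on the UNIX domain socket `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern in Python (the function name and timings here are illustrative, not SPDK's actual helper):

```python
import os
import socket
import time

def wait_for_unix_socket(path, timeout=5.0, interval=0.1):
    """Poll until a UNIX domain socket at `path` accepts a connection.

    Mirrors the log's waitforlisten behavior: retry at a fixed interval
    until the target process is up and listening, or give up after
    `timeout` seconds. Returns True on success, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True  # target is listening
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```

The real script additionally bounds the retries (`max_retries=100` in the trace) and then issues RPCs over the same socket once the connection succeeds.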
00:12:42.602 [2024-11-18 12:55:39.826236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.602 12:55:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:42.602 [2024-11-18 12:55:40.142455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.603 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:42.603 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:42.603 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:42.863 Malloc1 00:12:42.863 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:43.122 Malloc2 00:12:43.122 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:43.383 12:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:43.383 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.643 [2024-11-18 12:55:41.213670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.643 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:43.643 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c46e7afc-5cc4-473d-81ca-d4f00d296b5d -a 10.0.0.2 -s 4420 -i 4 00:12:43.904 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.904 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:43.904 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.904 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:43.904 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # 
grep -c SPDKISFASTANDAWESOME 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:45.814 [ 0]:0x1 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a0ce366a75d14cc9b6e796ed6319534f 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a0ce366a75d14cc9b6e796ed6319534f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.814 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:46.074 [ 0]:0x1 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a0ce366a75d14cc9b6e796ed6319534f 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a0ce366a75d14cc9b6e796ed6319534f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:46.074 [ 1]:0x2 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:46.074 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.334 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=140360fedbc445d583222be4d115d0b3 00:12:46.334 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 140360fedbc445d583222be4d115d0b3 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.334 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:46.334 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.334 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.593 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:46.593 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:46.593 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c46e7afc-5cc4-473d-81ca-d4f00d296b5d -a 10.0.0.2 -s 4420 -i 4 00:12:46.854 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:46.854 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:46.854 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.854 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:12:46.854 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:12:46.854 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:49.398 12:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:49.398 [ 0]:0x2 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=140360fedbc445d583222be4d115d0b3 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 140360fedbc445d583222be4d115d0b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:49.398 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:49.398 [ 0]:0x1 00:12:49.398 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:49.398 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.398 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a0ce366a75d14cc9b6e796ed6319534f 00:12:49.398 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a0ce366a75d14cc9b6e796ed6319534f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.398 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:49.398 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:49.398 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:12:49.398 [ 1]:0x2 00:12:49.398 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:49.398 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=140360fedbc445d583222be4d115d0b3 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 140360fedbc445d583222be4d115d0b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.659 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:49.660 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.660 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.660 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.660 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:49.660 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:49.660 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:49.919 [ 0]:0x2 00:12:49.919 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:49.919 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:49.919 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=140360fedbc445d583222be4d115d0b3 00:12:49.919 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 140360fedbc445d583222be4d115d0b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.919 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:49.919 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.919 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:50.179 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:50.179 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c46e7afc-5cc4-473d-81ca-d4f00d296b5d -a 10.0.0.2 -s 4420 -i 4 00:12:50.179 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:50.179 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:50.179 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.179 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:12:50.179 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:12:50.179 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:52.090 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:52.090 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l 
-o NAME,SERIAL 00:12:52.090 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.090 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:12:52.090 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.090 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:52.090 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:52.349 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:52.349 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:52.349 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:52.349 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:52.349 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.349 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:52.349 [ 0]:0x1 00:12:52.349 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:52.349 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:52.349 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a0ce366a75d14cc9b6e796ed6319534f 00:12:52.349 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a0ce366a75d14cc9b6e796ed6319534f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.349 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:52.349 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.349 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:52.609 [ 1]:0x2 00:12:52.609 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:52.609 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:52.609 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=140360fedbc445d583222be4d115d0b3 00:12:52.609 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 140360fedbc445d583222be4d115d0b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.609 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:52.869 [ 0]:0x2 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=140360fedbc445d583222be4d115d0b3 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 140360fedbc445d583222be4d115d0b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:52.869 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:53.129 [2024-11-18 12:55:50.624868] nvmf_rpc.c:1892:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:53.129 request: 00:12:53.129 { 00:12:53.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.129 "nsid": 2, 00:12:53.129 "host": "nqn.2016-06.io.spdk:host1", 00:12:53.129 "method": "nvmf_ns_remove_host", 00:12:53.129 "req_id": 1 00:12:53.129 } 00:12:53.129 Got JSON-RPC error response 00:12:53.129 response: 00:12:53.129 { 00:12:53.129 "code": -32602, 00:12:53.129 "message": "Invalid parameters" 00:12:53.129 } 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:53.129 12:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:53.129 [ 0]:0x2 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=140360fedbc445d583222be4d115d0b3 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 140360fedbc445d583222be4d115d0b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:53.129 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.389 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2278942 00:12:53.389 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.389 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2278942 /var/tmp/host.sock 00:12:53.389 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:53.389 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2278942 ']' 00:12:53.389 
12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:12:53.389 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:53.389 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:53.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:53.389 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:53.389 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:53.389 [2024-11-18 12:55:51.002449] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:12:53.389 [2024-11-18 12:55:51.002499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278942 ] 00:12:53.389 [2024-11-18 12:55:51.079760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.649 [2024-11-18 12:55:51.120664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.221 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:54.221 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:54.221 12:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.480 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:54.739 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8994261f-edd3-4c59-a63b-b20b0f8bc42c 00:12:54.739 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:54.739 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8994261FEDD34C59A63BB20B0F8BC42C -i 00:12:54.999 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 57daacb5-88a9-459c-9eb9-ab20544769b8 00:12:54.999 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:54.999 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 57DAACB588A9459C9EB9AB20544769B8 -i 00:12:54.999 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:55.259 12:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:55.519 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:55.519 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:55.779 nvme0n1 00:12:56.038 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:56.038 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:56.296 nvme1n2 00:12:56.296 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:56.296 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:56.296 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:56.296 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:56.296 12:55:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:56.555 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:56.555 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:56.555 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:56.555 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:56.814 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8994261f-edd3-4c59-a63b-b20b0f8bc42c == \8\9\9\4\2\6\1\f\-\e\d\d\3\-\4\c\5\9\-\a\6\3\b\-\b\2\0\b\0\f\8\b\c\4\2\c ]] 00:12:56.814 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:56.814 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:56.814 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:56.814 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 57daacb5-88a9-459c-9eb9-ab20544769b8 == \5\7\d\a\a\c\b\5\-\8\8\a\9\-\4\5\9\c\-\9\e\b\9\-\a\b\2\0\5\4\4\7\6\9\b\8 ]] 00:12:56.814 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.073 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8994261f-edd3-4c59-a63b-b20b0f8bc42c 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8994261FEDD34C59A63BB20B0F8BC42C 00:12:57.332 12:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8994261FEDD34C59A63BB20B0F8BC42C 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:57.332 12:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8994261FEDD34C59A63BB20B0F8BC42C 00:12:57.592 [2024-11-18 12:55:55.077229] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: invalid 00:12:57.592 [2024-11-18 12:55:55.077263] subsystem.c:2300:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:57.592 [2024-11-18 12:55:55.077273] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.592 request: 00:12:57.592 { 00:12:57.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.592 "namespace": { 00:12:57.592 "bdev_name": "invalid", 00:12:57.592 "nsid": 1, 00:12:57.592 "nguid": "8994261FEDD34C59A63BB20B0F8BC42C", 00:12:57.592 "no_auto_visible": false 00:12:57.592 }, 00:12:57.592 "method": "nvmf_subsystem_add_ns", 00:12:57.592 "req_id": 1 00:12:57.592 } 00:12:57.592 Got JSON-RPC error response 00:12:57.592 response: 00:12:57.592 { 00:12:57.592 "code": -32602, 00:12:57.592 "message": "Invalid parameters" 00:12:57.592 } 00:12:57.592 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:57.592 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:57.592 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:57.592 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:57.592 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8994261f-edd3-4c59-a63b-b20b0f8bc42c 00:12:57.592 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:57.592 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8994261FEDD34C59A63BB20B0F8BC42C -i 00:12:57.851 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:59.761 12:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:59.761 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:59.761 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2278942 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2278942 ']' 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2278942 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2278942 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2278942' 00:13:00.022 killing process with pid 2278942 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2278942 00:13:00.022 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2278942 00:13:00.282 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.542 rmmod nvme_tcp 00:13:00.542 rmmod nvme_fabrics 00:13:00.542 rmmod nvme_keyring 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2276939 ']' 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2276939 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2276939 ']' 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2276939 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # 
uname 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2276939 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2276939' 00:13:00.542 killing process with pid 2276939 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2276939 00:13:00.542 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2276939 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:00.802 12:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.802 12:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.343 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.343 00:13:03.343 real 0m27.032s 00:13:03.343 user 0m32.924s 00:13:03.343 sys 0m7.304s 00:13:03.343 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.343 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:03.343 ************************************ 00:13:03.343 END TEST nvmf_ns_masking 00:13:03.343 ************************************ 00:13:03.343 12:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:03.343 12:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:03.343 12:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:03.343 12:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.343 12:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.343 ************************************ 00:13:03.343 START TEST nvmf_nvme_cli 00:13:03.343 ************************************ 00:13:03.343 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:03.344 * Looking for test storage... 
00:13:03.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:03.344 12:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:03.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.344 --rc 
genhtml_branch_coverage=1 00:13:03.344 --rc genhtml_function_coverage=1 00:13:03.344 --rc genhtml_legend=1 00:13:03.344 --rc geninfo_all_blocks=1 00:13:03.344 --rc geninfo_unexecuted_blocks=1 00:13:03.344 00:13:03.344 ' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:03.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.344 --rc genhtml_branch_coverage=1 00:13:03.344 --rc genhtml_function_coverage=1 00:13:03.344 --rc genhtml_legend=1 00:13:03.344 --rc geninfo_all_blocks=1 00:13:03.344 --rc geninfo_unexecuted_blocks=1 00:13:03.344 00:13:03.344 ' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:03.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.344 --rc genhtml_branch_coverage=1 00:13:03.344 --rc genhtml_function_coverage=1 00:13:03.344 --rc genhtml_legend=1 00:13:03.344 --rc geninfo_all_blocks=1 00:13:03.344 --rc geninfo_unexecuted_blocks=1 00:13:03.344 00:13:03.344 ' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:03.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.344 --rc genhtml_branch_coverage=1 00:13:03.344 --rc genhtml_function_coverage=1 00:13:03.344 --rc genhtml_legend=1 00:13:03.344 --rc geninfo_all_blocks=1 00:13:03.344 --rc geninfo_unexecuted_blocks=1 00:13:03.344 00:13:03.344 ' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.344 12:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.344 12:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.344 12:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.344 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.345 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:09.926 12:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:09.926 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:09.926 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.926 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.927 12:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:09.927 Found net devices under 0000:86:00.0: cvl_0_0 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:09.927 Found net devices under 0000:86:00.1: cvl_0_1 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.927 12:56:06 
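The discovery trace above (`gather_supported_nvmf_pci_devs` in nvmf/common.sh) resolves each supported NIC's PCI address to its kernel interface name by globbing sysfs. A minimal sketch of that glob step, run against a mock sysfs tree so it works without real hardware — the PCI addresses and `cvl_0_0`/`cvl_0_1` names come from this log, while the mock tree itself is an assumption for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup traced above: each PCI device directory
# exposes its netdev name under .../net/<ifname>. The mock tree stands
# in for a real /sys; the real script globs /sys/bus/pci/devices/$pci/net/*.
set -euo pipefail

sysfs=$(mktemp -d)
# Mock the two E810 ports seen in the log (0000:86:00.0/.1 -> cvl_0_0/cvl_0_1).
mkdir -p "$sysfs/bus/pci/devices/0000:86:00.0/net/cvl_0_0"
mkdir -p "$sysfs/bus/pci/devices/0000:86:00.1/net/cvl_0_1"

pci_devs=(0000:86:00.0 0000:86:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    # One glob per PCI function (cf. nvmf/common.sh@411 in the trace).
    pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)
    # Strip the leading path, keeping only the interface name (cf. @427).
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[*]}"
rm -rf "$sysfs"
```

With two populated PCI functions, `net_devs` ends up holding both interface names, matching the "Found net devices under ..." lines in the trace.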
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:09.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:13:09.927 00:13:09.927 --- 10.0.0.2 ping statistics --- 00:13:09.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.927 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:09.927 00:13:09.927 --- 10.0.0.1 ping statistics --- 00:13:09.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.927 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:09.927 12:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2283665 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2283665 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 2283665 ']' 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:09.927 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.927 [2024-11-18 12:56:06.779863] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:13:09.927 [2024-11-18 12:56:06.779904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.927 [2024-11-18 12:56:06.856840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.927 [2024-11-18 12:56:06.900962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.927 [2024-11-18 12:56:06.901000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.927 [2024-11-18 12:56:06.901008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.927 [2024-11-18 12:56:06.901014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.927 [2024-11-18 12:56:06.901019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:09.927 [2024-11-18 12:56:06.902479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.927 [2024-11-18 12:56:06.902520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.927 [2024-11-18 12:56:06.902627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.927 [2024-11-18 12:56:06.902628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.927 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:09.927 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:13:09.927 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.927 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:09.927 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.927 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.927 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:09.927 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.927 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.928 [2024-11-18 12:56:07.048102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
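The `waitforlisten 2283665` step above blocks until the freshly started `nvmf_tgt` is ready on `/var/tmp/spdk.sock`. A rough sketch of the polling idea only — the real helper also checks that the pid is alive and that the RPC socket answers; here a temp file stands in for the socket, and the stub "target" and retry budget are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll with a bounded retry budget until
# the target's RPC endpoint (default /var/tmp/spdk.sock) appears. This stub
# models only the polling, using a plain file created late in the background.
sock=$(mktemp -u)              # path that does not exist yet
( sleep 0.3; : > "$sock" ) &   # background "target" creating its socket late

waitforlisten() {
    local path=$1 max_retries=${2:-100} i=0
    while ((i++ < max_retries)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

waitforlisten "$sock" 20 && status=up || status=down
echo "target socket is $status"
wait
rm -f "$sock"
```

The bounded loop is what lets the test fail fast with a clear timeout instead of hanging if the target never comes up.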
00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.928 Malloc0 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.928 Malloc1 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.928 [2024-11-18 12:56:07.137895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:09.928 00:13:09.928 Discovery Log Number of Records 2, Generation counter 2 00:13:09.928 =====Discovery Log Entry 0====== 00:13:09.928 trtype: tcp 00:13:09.928 adrfam: ipv4 00:13:09.928 subtype: current discovery subsystem 00:13:09.928 treq: not required 00:13:09.928 portid: 0 00:13:09.928 trsvcid: 4420 
00:13:09.928 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:09.928 traddr: 10.0.0.2 00:13:09.928 eflags: explicit discovery connections, duplicate discovery information 00:13:09.928 sectype: none 00:13:09.928 =====Discovery Log Entry 1====== 00:13:09.928 trtype: tcp 00:13:09.928 adrfam: ipv4 00:13:09.928 subtype: nvme subsystem 00:13:09.928 treq: not required 00:13:09.928 portid: 0 00:13:09.928 trsvcid: 4420 00:13:09.928 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:09.928 traddr: 10.0.0.2 00:13:09.928 eflags: none 00:13:09.928 sectype: none 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:09.928 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.867 12:56:08 
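The `get_nvme_devs` helper traced above reads `nvme list` output line by line and keeps only first columns that name an NVMe block device, which is why the trace shows the `Node` header and the dashed separator being rejected. A sketch of that filter, fed canned `nvme list`-style output so it runs without nvme-cli or real devices — the sample rows are modeled on this log's `/dev/nvme0n1` and `/dev/nvme0n2`:

```shell
#!/usr/bin/env bash
# Sketch of nvmf/common.sh's get_nvme_devs: read each line, keep the first
# field only when it is a /dev/nvme* path (skips the "Node" header row and
# the dashed separator row, exactly as in the trace).
get_nvme_devs_from() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

# Canned output standing in for `nvme list` (assumption for illustration).
sample='Node                  SN                   Model
--------------------- -------------------- ------------------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1'

devs=($(printf '%s\n' "$sample" | get_nvme_devs_from))
echo "found ${#devs[@]} devices: ${devs[*]}"
```

The test then counts the surviving entries to decide how many namespaces the connect produced (`nvme_num=2` later in the trace).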
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:10.867 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:13:10.867 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.867 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:10.867 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:10.867 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:13.407 
12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:13.407 /dev/nvme0n2 ]] 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:13.407 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.407 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.407 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:13:13.407 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:13.407 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.668 rmmod nvme_tcp 00:13:13.668 rmmod nvme_fabrics 00:13:13.668 rmmod nvme_keyring 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2283665 ']' 
00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2283665 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 2283665 ']' 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 2283665 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2283665 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2283665' 00:13:13.668 killing process with pid 2283665 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 2283665 00:13:13.668 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 2283665 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.928 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.835 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:16.094 00:13:16.094 real 0m12.998s 00:13:16.094 user 0m19.904s 00:13:16.094 sys 0m5.107s 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:16.094 ************************************ 00:13:16.094 END TEST nvmf_nvme_cli 00:13:16.094 ************************************ 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:16.094 ************************************ 00:13:16.094 
START TEST nvmf_vfio_user 00:13:16.094 ************************************ 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:16.094 * Looking for test storage... 00:13:16.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.094 12:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.094 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:16.095 12:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:16.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.095 --rc genhtml_branch_coverage=1 00:13:16.095 --rc genhtml_function_coverage=1 00:13:16.095 --rc genhtml_legend=1 00:13:16.095 --rc geninfo_all_blocks=1 00:13:16.095 --rc geninfo_unexecuted_blocks=1 00:13:16.095 00:13:16.095 ' 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:16.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.095 --rc genhtml_branch_coverage=1 00:13:16.095 --rc genhtml_function_coverage=1 00:13:16.095 --rc genhtml_legend=1 00:13:16.095 --rc geninfo_all_blocks=1 00:13:16.095 --rc geninfo_unexecuted_blocks=1 00:13:16.095 00:13:16.095 ' 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:16.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.095 --rc genhtml_branch_coverage=1 00:13:16.095 --rc genhtml_function_coverage=1 00:13:16.095 --rc genhtml_legend=1 00:13:16.095 --rc geninfo_all_blocks=1 00:13:16.095 --rc geninfo_unexecuted_blocks=1 00:13:16.095 00:13:16.095 ' 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:16.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.095 --rc genhtml_branch_coverage=1 00:13:16.095 --rc genhtml_function_coverage=1 00:13:16.095 --rc genhtml_legend=1 00:13:16.095 --rc geninfo_all_blocks=1 00:13:16.095 --rc geninfo_unexecuted_blocks=1 00:13:16.095 00:13:16.095 ' 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.095 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.355 
12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.355 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:16.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:16.356 12:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2284957 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2284957' 00:13:16.356 Process pid: 2284957 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2284957 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' 
-z 2284957 ']' 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.356 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:16.356 [2024-11-18 12:56:13.872460] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:13:16.356 [2024-11-18 12:56:13.872507] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.356 [2024-11-18 12:56:13.946606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.356 [2024-11-18 12:56:13.987034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.356 [2024-11-18 12:56:13.987073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.356 [2024-11-18 12:56:13.987081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.356 [2024-11-18 12:56:13.987086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.356 [2024-11-18 12:56:13.987094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:16.356 [2024-11-18 12:56:13.988655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.356 [2024-11-18 12:56:13.988765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.356 [2024-11-18 12:56:13.988848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.356 [2024-11-18 12:56:13.988849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.616 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:16.616 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:13:16.616 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:17.555 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:17.815 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:17.815 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:17.815 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:17.815 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:17.815 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:17.815 Malloc1 00:13:18.075 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:18.075 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:18.335 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:18.594 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:18.595 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:18.595 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:18.854 Malloc2 00:13:18.855 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:19.115 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:19.115 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:19.375 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:19.375 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:19.375 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:19.375 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:19.375 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:19.375 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:19.375 [2024-11-18 12:56:16.996791] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:13:19.375 [2024-11-18 12:56:16.996838] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285444 ] 00:13:19.375 [2024-11-18 12:56:17.038285] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:19.375 [2024-11-18 12:56:17.042606] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:19.375 [2024-11-18 12:56:17.042626] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3c3e1f6000 00:13:19.375 [2024-11-18 12:56:17.043605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.375 [2024-11-18 12:56:17.044604] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.375 [2024-11-18 12:56:17.045610] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.375 [2024-11-18 12:56:17.046610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:19.375 [2024-11-18 12:56:17.047615] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:19.375 [2024-11-18 12:56:17.048623] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.375 [2024-11-18 12:56:17.049635] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:19.375 [2024-11-18 12:56:17.050638] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.375 [2024-11-18 12:56:17.051653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:19.375 [2024-11-18 12:56:17.051666] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3c3e1eb000 00:13:19.375 [2024-11-18 12:56:17.052746] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:19.375 [2024-11-18 12:56:17.064184] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:19.375 [2024-11-18 12:56:17.064210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:19.375 [2024-11-18 12:56:17.072776] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:19.375 [2024-11-18 12:56:17.072811] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:19.375 [2024-11-18 12:56:17.072881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:19.375 [2024-11-18 12:56:17.072896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:19.375 [2024-11-18 12:56:17.072901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:19.638 [2024-11-18 12:56:17.073774] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:19.638 [2024-11-18 12:56:17.073787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:19.638 [2024-11-18 12:56:17.073797] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:19.638 [2024-11-18 12:56:17.074780] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:19.638 [2024-11-18 12:56:17.074788] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:19.638 [2024-11-18 12:56:17.074794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:19.638 [2024-11-18 12:56:17.075786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:19.638 [2024-11-18 12:56:17.075794] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:19.638 [2024-11-18 12:56:17.076789] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:19.638 [2024-11-18 12:56:17.076797] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:19.638 [2024-11-18 12:56:17.076802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:19.638 [2024-11-18 12:56:17.076808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:19.638 [2024-11-18 12:56:17.076915] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:19.638 [2024-11-18 12:56:17.076920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:19.638 [2024-11-18 12:56:17.076925] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:19.638 [2024-11-18 12:56:17.077796] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:19.638 [2024-11-18 12:56:17.078795] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:19.638 [2024-11-18 12:56:17.079804] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:19.638 [2024-11-18 12:56:17.080807] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:19.638 [2024-11-18 12:56:17.080885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:19.638 [2024-11-18 12:56:17.081816] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:19.638 [2024-11-18 12:56:17.081823] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:19.638 [2024-11-18 12:56:17.081828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.081845] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:19.638 [2024-11-18 12:56:17.081852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.081870] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:19.638 [2024-11-18 12:56:17.081874] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:19.638 [2024-11-18 12:56:17.081879] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:19.638 [2024-11-18 12:56:17.081890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:19.638 [2024-11-18 12:56:17.081936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:19.638 [2024-11-18 12:56:17.081944] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:19.638 [2024-11-18 12:56:17.081949] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:19.638 [2024-11-18 12:56:17.081952] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:19.638 [2024-11-18 12:56:17.081957] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:19.638 [2024-11-18 12:56:17.081961] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:19.638 [2024-11-18 12:56:17.081967] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:19.638 [2024-11-18 12:56:17.081971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.081978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.081987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:19.638 [2024-11-18 12:56:17.082000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:19.638 [2024-11-18 12:56:17.082012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.638 [2024-11-18 
12:56:17.082019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.638 [2024-11-18 12:56:17.082027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.638 [2024-11-18 12:56:17.082034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.638 [2024-11-18 12:56:17.082038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.082044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.082052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:19.638 [2024-11-18 12:56:17.082064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:19.638 [2024-11-18 12:56:17.082070] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:19.638 [2024-11-18 12:56:17.082075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.082082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.082088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.082095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:19.638 [2024-11-18 12:56:17.082105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:19.638 [2024-11-18 12:56:17.082155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:19.638 [2024-11-18 12:56:17.082162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082169] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:19.639 [2024-11-18 12:56:17.082173] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:19.639 [2024-11-18 12:56:17.082176] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:19.639 [2024-11-18 12:56:17.082181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082202] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:19.639 [2024-11-18 12:56:17.082212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:19.639 [2024-11-18 12:56:17.082229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:19.639 [2024-11-18 12:56:17.082232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:19.639 [2024-11-18 12:56:17.082238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082285] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:19.639 [2024-11-18 12:56:17.082289] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:19.639 [2024-11-18 12:56:17.082292] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:19.639 [2024-11-18 12:56:17.082298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082349] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:19.639 [2024-11-18 12:56:17.082359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:19.639 [2024-11-18 12:56:17.082364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:19.639 [2024-11-18 12:56:17.082381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082392] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082468] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:19.639 [2024-11-18 12:56:17.082472] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:19.639 [2024-11-18 12:56:17.082475] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:19.639 [2024-11-18 12:56:17.082478] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:19.639 [2024-11-18 12:56:17.082481] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:19.639 [2024-11-18 12:56:17.082487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:19.639 [2024-11-18 12:56:17.082493] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:19.639 [2024-11-18 12:56:17.082497] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:19.639 [2024-11-18 12:56:17.082500] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:19.639 [2024-11-18 12:56:17.082506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082513] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:19.639 [2024-11-18 12:56:17.082517] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:19.639 [2024-11-18 12:56:17.082520] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:19.639 [2024-11-18 12:56:17.082526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082533] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:19.639 [2024-11-18 12:56:17.082537] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:19.639 [2024-11-18 12:56:17.082540] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:19.639 [2024-11-18 12:56:17.082546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:19.639 [2024-11-18 12:56:17.082552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:19.639 [2024-11-18 12:56:17.082579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:19.639 ===================================================== 00:13:19.639 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:19.639 ===================================================== 00:13:19.639 Controller Capabilities/Features 00:13:19.639 ================================ 00:13:19.639 Vendor ID: 4e58 00:13:19.639 Subsystem Vendor ID: 4e58 00:13:19.639 Serial Number: SPDK1 00:13:19.639 Model Number: SPDK bdev Controller 00:13:19.639 Firmware Version: 25.01 00:13:19.639 Recommended Arb Burst: 6 00:13:19.639 IEEE OUI Identifier: 8d 6b 50 00:13:19.639 Multi-path I/O 00:13:19.639 May have multiple subsystem ports: Yes 00:13:19.639 May have multiple controllers: Yes 00:13:19.639 Associated with SR-IOV VF: No 00:13:19.639 Max Data Transfer Size: 131072 00:13:19.639 Max Number of Namespaces: 32 00:13:19.639 Max Number of I/O Queues: 127 00:13:19.639 NVMe Specification Version (VS): 1.3 00:13:19.639 NVMe Specification Version (Identify): 1.3 00:13:19.639 Maximum Queue Entries: 256 00:13:19.639 Contiguous Queues Required: Yes 00:13:19.639 Arbitration Mechanisms Supported 00:13:19.639 Weighted Round Robin: Not Supported 00:13:19.639 Vendor Specific: Not Supported 00:13:19.639 Reset Timeout: 15000 ms 00:13:19.639 Doorbell Stride: 4 bytes 00:13:19.639 NVM Subsystem Reset: Not Supported 00:13:19.639 Command Sets Supported 00:13:19.639 NVM Command Set: Supported 00:13:19.639 Boot Partition: Not Supported 00:13:19.639 Memory 
Page Size Minimum: 4096 bytes 00:13:19.639 Memory Page Size Maximum: 4096 bytes 00:13:19.639 Persistent Memory Region: Not Supported 00:13:19.639 Optional Asynchronous Events Supported 00:13:19.639 Namespace Attribute Notices: Supported 00:13:19.639 Firmware Activation Notices: Not Supported 00:13:19.639 ANA Change Notices: Not Supported 00:13:19.639 PLE Aggregate Log Change Notices: Not Supported 00:13:19.639 LBA Status Info Alert Notices: Not Supported 00:13:19.639 EGE Aggregate Log Change Notices: Not Supported 00:13:19.639 Normal NVM Subsystem Shutdown event: Not Supported 00:13:19.639 Zone Descriptor Change Notices: Not Supported 00:13:19.639 Discovery Log Change Notices: Not Supported 00:13:19.639 Controller Attributes 00:13:19.639 128-bit Host Identifier: Supported 00:13:19.639 Non-Operational Permissive Mode: Not Supported 00:13:19.639 NVM Sets: Not Supported 00:13:19.639 Read Recovery Levels: Not Supported 00:13:19.639 Endurance Groups: Not Supported 00:13:19.639 Predictable Latency Mode: Not Supported 00:13:19.639 Traffic Based Keep ALive: Not Supported 00:13:19.639 Namespace Granularity: Not Supported 00:13:19.640 SQ Associations: Not Supported 00:13:19.640 UUID List: Not Supported 00:13:19.640 Multi-Domain Subsystem: Not Supported 00:13:19.640 Fixed Capacity Management: Not Supported 00:13:19.640 Variable Capacity Management: Not Supported 00:13:19.640 Delete Endurance Group: Not Supported 00:13:19.640 Delete NVM Set: Not Supported 00:13:19.640 Extended LBA Formats Supported: Not Supported 00:13:19.640 Flexible Data Placement Supported: Not Supported 00:13:19.640 00:13:19.640 Controller Memory Buffer Support 00:13:19.640 ================================ 00:13:19.640 Supported: No 00:13:19.640 00:13:19.640 Persistent Memory Region Support 00:13:19.640 ================================ 00:13:19.640 Supported: No 00:13:19.640 00:13:19.640 Admin Command Set Attributes 00:13:19.640 ============================ 00:13:19.640 Security Send/Receive: Not Supported 
00:13:19.640 Format NVM: Not Supported 00:13:19.640 Firmware Activate/Download: Not Supported 00:13:19.640 Namespace Management: Not Supported 00:13:19.640 Device Self-Test: Not Supported 00:13:19.640 Directives: Not Supported 00:13:19.640 NVMe-MI: Not Supported 00:13:19.640 Virtualization Management: Not Supported 00:13:19.640 Doorbell Buffer Config: Not Supported 00:13:19.640 Get LBA Status Capability: Not Supported 00:13:19.640 Command & Feature Lockdown Capability: Not Supported 00:13:19.640 Abort Command Limit: 4 00:13:19.640 Async Event Request Limit: 4 00:13:19.640 Number of Firmware Slots: N/A 00:13:19.640 Firmware Slot 1 Read-Only: N/A 00:13:19.640 Firmware Activation Without Reset: N/A 00:13:19.640 Multiple Update Detection Support: N/A 00:13:19.640 Firmware Update Granularity: No Information Provided 00:13:19.640 Per-Namespace SMART Log: No 00:13:19.640 Asymmetric Namespace Access Log Page: Not Supported 00:13:19.640 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:19.640 Command Effects Log Page: Supported 00:13:19.640 Get Log Page Extended Data: Supported 00:13:19.640 Telemetry Log Pages: Not Supported 00:13:19.640 Persistent Event Log Pages: Not Supported 00:13:19.640 Supported Log Pages Log Page: May Support 00:13:19.640 Commands Supported & Effects Log Page: Not Supported 00:13:19.640 Feature Identifiers & Effects Log Page:May Support 00:13:19.640 NVMe-MI Commands & Effects Log Page: May Support 00:13:19.640 Data Area 4 for Telemetry Log: Not Supported 00:13:19.640 Error Log Page Entries Supported: 128 00:13:19.640 Keep Alive: Supported 00:13:19.640 Keep Alive Granularity: 10000 ms 00:13:19.640 00:13:19.640 NVM Command Set Attributes 00:13:19.640 ========================== 00:13:19.640 Submission Queue Entry Size 00:13:19.640 Max: 64 00:13:19.640 Min: 64 00:13:19.640 Completion Queue Entry Size 00:13:19.640 Max: 16 00:13:19.640 Min: 16 00:13:19.640 Number of Namespaces: 32 00:13:19.640 Compare Command: Supported 00:13:19.640 Write Uncorrectable 
Command: Not Supported 00:13:19.640 Dataset Management Command: Supported 00:13:19.640 Write Zeroes Command: Supported 00:13:19.640 Set Features Save Field: Not Supported 00:13:19.640 Reservations: Not Supported 00:13:19.640 Timestamp: Not Supported 00:13:19.640 Copy: Supported 00:13:19.640 Volatile Write Cache: Present 00:13:19.640 Atomic Write Unit (Normal): 1 00:13:19.640 Atomic Write Unit (PFail): 1 00:13:19.640 Atomic Compare & Write Unit: 1 00:13:19.640 Fused Compare & Write: Supported 00:13:19.640 Scatter-Gather List 00:13:19.640 SGL Command Set: Supported (Dword aligned) 00:13:19.640 SGL Keyed: Not Supported 00:13:19.640 SGL Bit Bucket Descriptor: Not Supported 00:13:19.640 SGL Metadata Pointer: Not Supported 00:13:19.640 Oversized SGL: Not Supported 00:13:19.640 SGL Metadata Address: Not Supported 00:13:19.640 SGL Offset: Not Supported 00:13:19.640 Transport SGL Data Block: Not Supported 00:13:19.640 Replay Protected Memory Block: Not Supported 00:13:19.640 00:13:19.640 Firmware Slot Information 00:13:19.640 ========================= 00:13:19.640 Active slot: 1 00:13:19.640 Slot 1 Firmware Revision: 25.01 00:13:19.640 00:13:19.640 00:13:19.640 Commands Supported and Effects 00:13:19.640 ============================== 00:13:19.640 Admin Commands 00:13:19.640 -------------- 00:13:19.640 Get Log Page (02h): Supported 00:13:19.640 Identify (06h): Supported 00:13:19.640 Abort (08h): Supported 00:13:19.640 Set Features (09h): Supported 00:13:19.640 Get Features (0Ah): Supported 00:13:19.640 Asynchronous Event Request (0Ch): Supported 00:13:19.640 Keep Alive (18h): Supported 00:13:19.640 I/O Commands 00:13:19.640 ------------ 00:13:19.640 Flush (00h): Supported LBA-Change 00:13:19.640 Write (01h): Supported LBA-Change 00:13:19.640 Read (02h): Supported 00:13:19.640 Compare (05h): Supported 00:13:19.640 Write Zeroes (08h): Supported LBA-Change 00:13:19.640 Dataset Management (09h): Supported LBA-Change 00:13:19.640 Copy (19h): Supported LBA-Change 00:13:19.640 
00:13:19.640 Error Log 00:13:19.640 ========= 00:13:19.640 00:13:19.640 Arbitration 00:13:19.640 =========== 00:13:19.640 Arbitration Burst: 1 00:13:19.640 00:13:19.640 Power Management 00:13:19.640 ================ 00:13:19.640 Number of Power States: 1 00:13:19.640 Current Power State: Power State #0 00:13:19.640 Power State #0: 00:13:19.640 Max Power: 0.00 W 00:13:19.640 Non-Operational State: Operational 00:13:19.640 Entry Latency: Not Reported 00:13:19.640 Exit Latency: Not Reported 00:13:19.640 Relative Read Throughput: 0 00:13:19.640 Relative Read Latency: 0 00:13:19.640 Relative Write Throughput: 0 00:13:19.640 Relative Write Latency: 0 00:13:19.640 Idle Power: Not Reported 00:13:19.640 Active Power: Not Reported 00:13:19.640 Non-Operational Permissive Mode: Not Supported 00:13:19.640 00:13:19.640 Health Information 00:13:19.640 ================== 00:13:19.640 Critical Warnings: 00:13:19.640 Available Spare Space: OK 00:13:19.640 Temperature: OK 00:13:19.640 Device Reliability: OK 00:13:19.640 Read Only: No 00:13:19.640 Volatile Memory Backup: OK 00:13:19.640 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:19.640 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:19.640 Available Spare: 0% 00:13:19.640 Available Sp[2024-11-18 12:56:17.082662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:19.640 [2024-11-18 12:56:17.082669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:19.640 [2024-11-18 12:56:17.082692] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:19.640 [2024-11-18 12:56:17.082700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.640 [2024-11-18 12:56:17.082706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.640 [2024-11-18 12:56:17.082711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.640 [2024-11-18 12:56:17.082716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.640 [2024-11-18 12:56:17.082823] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:19.640 [2024-11-18 12:56:17.082832] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:19.640 [2024-11-18 12:56:17.083827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:19.640 [2024-11-18 12:56:17.083873] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:19.640 [2024-11-18 12:56:17.083879] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:19.640 [2024-11-18 12:56:17.084834] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:19.640 [2024-11-18 12:56:17.084844] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:19.640 [2024-11-18 12:56:17.084892] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:19.640 [2024-11-18 12:56:17.086863] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:19.640 are Threshold: 0% 00:13:19.640 Life Percentage Used: 0% 
00:13:19.640 Data Units Read: 0 00:13:19.640 Data Units Written: 0 00:13:19.640 Host Read Commands: 0 00:13:19.640 Host Write Commands: 0 00:13:19.640 Controller Busy Time: 0 minutes 00:13:19.640 Power Cycles: 0 00:13:19.640 Power On Hours: 0 hours 00:13:19.640 Unsafe Shutdowns: 0 00:13:19.640 Unrecoverable Media Errors: 0 00:13:19.640 Lifetime Error Log Entries: 0 00:13:19.640 Warning Temperature Time: 0 minutes 00:13:19.640 Critical Temperature Time: 0 minutes 00:13:19.640 00:13:19.640 Number of Queues 00:13:19.640 ================ 00:13:19.640 Number of I/O Submission Queues: 127 00:13:19.640 Number of I/O Completion Queues: 127 00:13:19.640 00:13:19.641 Active Namespaces 00:13:19.641 ================= 00:13:19.641 Namespace ID:1 00:13:19.641 Error Recovery Timeout: Unlimited 00:13:19.641 Command Set Identifier: NVM (00h) 00:13:19.641 Deallocate: Supported 00:13:19.641 Deallocated/Unwritten Error: Not Supported 00:13:19.641 Deallocated Read Value: Unknown 00:13:19.641 Deallocate in Write Zeroes: Not Supported 00:13:19.641 Deallocated Guard Field: 0xFFFF 00:13:19.641 Flush: Supported 00:13:19.641 Reservation: Supported 00:13:19.641 Namespace Sharing Capabilities: Multiple Controllers 00:13:19.641 Size (in LBAs): 131072 (0GiB) 00:13:19.641 Capacity (in LBAs): 131072 (0GiB) 00:13:19.641 Utilization (in LBAs): 131072 (0GiB) 00:13:19.641 NGUID: EC14D1F6E1044226B715547B553D1400 00:13:19.641 UUID: ec14d1f6-e104-4226-b715-547b553d1400 00:13:19.641 Thin Provisioning: Not Supported 00:13:19.641 Per-NS Atomic Units: Yes 00:13:19.641 Atomic Boundary Size (Normal): 0 00:13:19.641 Atomic Boundary Size (PFail): 0 00:13:19.641 Atomic Boundary Offset: 0 00:13:19.641 Maximum Single Source Range Length: 65535 00:13:19.641 Maximum Copy Length: 65535 00:13:19.641 Maximum Source Range Count: 1 00:13:19.641 NGUID/EUI64 Never Reused: No 00:13:19.641 Namespace Write Protected: No 00:13:19.641 Number of LBA Formats: 1 00:13:19.641 Current LBA Format: LBA Format #00 00:13:19.641 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:19.641 00:13:19.641 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:19.641 [2024-11-18 12:56:17.324231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:24.933 Initializing NVMe Controllers 00:13:24.933 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:24.933 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:24.933 Initialization complete. Launching workers. 00:13:24.933 ======================================================== 00:13:24.933 Latency(us) 00:13:24.933 Device Information : IOPS MiB/s Average min max 00:13:24.933 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39919.97 155.94 3208.87 979.44 10341.91 00:13:24.933 ======================================================== 00:13:24.933 Total : 39919.97 155.94 3208.87 979.44 10341.91 00:13:24.933 00:13:24.933 [2024-11-18 12:56:22.347768] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:24.933 12:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:24.933 [2024-11-18 12:56:22.583841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:30.229 Initializing NVMe Controllers 00:13:30.229 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:30.229 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:30.229 Initialization complete. Launching workers. 00:13:30.229 ======================================================== 00:13:30.229 Latency(us) 00:13:30.229 Device Information : IOPS MiB/s Average min max 00:13:30.229 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16045.65 62.68 7976.55 4984.20 10979.01 00:13:30.229 ======================================================== 00:13:30.229 Total : 16045.65 62.68 7976.55 4984.20 10979.01 00:13:30.229 00:13:30.229 [2024-11-18 12:56:27.619770] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:30.229 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:30.229 [2024-11-18 12:56:27.823740] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.512 [2024-11-18 12:56:32.889651] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.512 Initializing NVMe Controllers 00:13:35.512 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:35.512 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:35.512 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:35.512 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:35.512 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:35.512 Initialization complete. 
Launching workers. 00:13:35.512 Starting thread on core 2 00:13:35.512 Starting thread on core 3 00:13:35.512 Starting thread on core 1 00:13:35.512 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:35.512 [2024-11-18 12:56:33.183986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.808 [2024-11-18 12:56:36.246608] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:38.808 Initializing NVMe Controllers 00:13:38.808 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.808 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.808 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:38.808 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:38.808 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:38.808 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:38.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:38.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:38.808 Initialization complete. Launching workers. 
00:13:38.808 Starting thread on core 1 with urgent priority queue 00:13:38.808 Starting thread on core 2 with urgent priority queue 00:13:38.808 Starting thread on core 3 with urgent priority queue 00:13:38.808 Starting thread on core 0 with urgent priority queue 00:13:38.808 SPDK bdev Controller (SPDK1 ) core 0: 7203.00 IO/s 13.88 secs/100000 ios 00:13:38.808 SPDK bdev Controller (SPDK1 ) core 1: 6433.67 IO/s 15.54 secs/100000 ios 00:13:38.808 SPDK bdev Controller (SPDK1 ) core 2: 6271.33 IO/s 15.95 secs/100000 ios 00:13:38.808 SPDK bdev Controller (SPDK1 ) core 3: 5255.67 IO/s 19.03 secs/100000 ios 00:13:38.808 ======================================================== 00:13:38.808 00:13:38.808 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:39.069 [2024-11-18 12:56:36.535772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:39.069 Initializing NVMe Controllers 00:13:39.069 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:39.069 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:39.069 Namespace ID: 1 size: 0GB 00:13:39.069 Initialization complete. 00:13:39.069 INFO: using host memory buffer for IO 00:13:39.069 Hello world! 
00:13:39.069 [2024-11-18 12:56:36.570029] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:39.069 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:39.329 [2024-11-18 12:56:36.853795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:40.269 Initializing NVMe Controllers 00:13:40.269 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:40.269 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:40.269 Initialization complete. Launching workers. 00:13:40.269 submit (in ns) avg, min, max = 9391.7, 3252.2, 4000479.1 00:13:40.269 complete (in ns) avg, min, max = 17660.3, 1819.1, 3999673.9 00:13:40.269 00:13:40.269 Submit histogram 00:13:40.270 ================ 00:13:40.270 Range in us Cumulative Count 00:13:40.270 3.242 - 3.256: 0.0062% ( 1) 00:13:40.270 3.270 - 3.283: 0.1110% ( 17) 00:13:40.270 3.283 - 3.297: 0.4811% ( 60) 00:13:40.270 3.297 - 3.311: 1.2707% ( 128) 00:13:40.270 3.311 - 3.325: 3.2013% ( 313) 00:13:40.270 3.325 - 3.339: 7.3526% ( 673) 00:13:40.270 3.339 - 3.353: 12.4784% ( 831) 00:13:40.270 3.353 - 3.367: 18.3814% ( 957) 00:13:40.270 3.367 - 3.381: 24.5559% ( 1001) 00:13:40.270 3.381 - 3.395: 30.6008% ( 980) 00:13:40.270 3.395 - 3.409: 35.5107% ( 796) 00:13:40.270 3.409 - 3.423: 40.9758% ( 886) 00:13:40.270 3.423 - 3.437: 46.1942% ( 846) 00:13:40.270 3.437 - 3.450: 50.9931% ( 778) 00:13:40.270 3.450 - 3.464: 55.1567% ( 675) 00:13:40.270 3.464 - 3.478: 60.9672% ( 942) 00:13:40.270 3.478 - 3.492: 67.3390% ( 1033) 00:13:40.270 3.492 - 3.506: 71.5026% ( 675) 00:13:40.270 3.506 - 3.520: 76.2707% ( 773) 00:13:40.270 3.520 - 3.534: 80.6933% ( 717) 00:13:40.270 3.534 - 3.548: 83.6664% ( 482) 
00:13:40.270 3.548 - 3.562: 85.5848% ( 311) 00:13:40.270 3.562 - 3.590: 87.3427% ( 285) 00:13:40.270 3.590 - 3.617: 88.1199% ( 126) 00:13:40.270 3.617 - 3.645: 89.3351% ( 197) 00:13:40.270 3.645 - 3.673: 90.9388% ( 260) 00:13:40.270 3.673 - 3.701: 92.9065% ( 319) 00:13:40.270 3.701 - 3.729: 94.5349% ( 264) 00:13:40.270 3.729 - 3.757: 96.1633% ( 264) 00:13:40.270 3.757 - 3.784: 97.5142% ( 219) 00:13:40.270 3.784 - 3.812: 98.4024% ( 144) 00:13:40.270 3.812 - 3.840: 98.8897% ( 79) 00:13:40.270 3.840 - 3.868: 99.2660% ( 61) 00:13:40.270 3.868 - 3.896: 99.4819% ( 35) 00:13:40.270 3.896 - 3.923: 99.5682% ( 14) 00:13:40.270 3.923 - 3.951: 99.5991% ( 5) 00:13:40.270 3.951 - 3.979: 99.6052% ( 1) 00:13:40.270 5.454 - 5.482: 99.6114% ( 1) 00:13:40.270 5.565 - 5.593: 99.6176% ( 1) 00:13:40.270 5.593 - 5.621: 99.6237% ( 1) 00:13:40.270 5.760 - 5.788: 99.6299% ( 1) 00:13:40.270 5.843 - 5.871: 99.6361% ( 1) 00:13:40.270 6.010 - 6.038: 99.6484% ( 2) 00:13:40.270 6.038 - 6.066: 99.6546% ( 1) 00:13:40.270 6.066 - 6.094: 99.6607% ( 1) 00:13:40.270 6.317 - 6.344: 99.6731% ( 2) 00:13:40.270 6.483 - 6.511: 99.6854% ( 2) 00:13:40.270 6.790 - 6.817: 99.6916% ( 1) 00:13:40.270 6.984 - 7.012: 99.6978% ( 1) 00:13:40.270 7.179 - 7.235: 99.7039% ( 1) 00:13:40.270 7.290 - 7.346: 99.7101% ( 1) 00:13:40.270 7.346 - 7.402: 99.7163% ( 1) 00:13:40.270 7.513 - 7.569: 99.7224% ( 1) 00:13:40.270 7.736 - 7.791: 99.7286% ( 1) 00:13:40.270 7.847 - 7.903: 99.7348% ( 1) 00:13:40.270 8.125 - 8.181: 99.7533% ( 3) 00:13:40.270 8.237 - 8.292: 99.7594% ( 1) 00:13:40.270 8.403 - 8.459: 99.7656% ( 1) 00:13:40.270 8.459 - 8.515: 99.7718% ( 1) 00:13:40.270 8.682 - 8.737: 99.7779% ( 1) 00:13:40.270 8.737 - 8.793: 99.7841% ( 1) 00:13:40.270 8.849 - 8.904: 99.7903% ( 1) 00:13:40.270 8.904 - 8.960: 99.7964% ( 1) 00:13:40.270 8.960 - 9.016: 99.8026% ( 1) 00:13:40.270 9.016 - 9.071: 99.8088% ( 1) 00:13:40.270 9.071 - 9.127: 99.8150% ( 1) 00:13:40.270 9.238 - 9.294: 99.8211% ( 1) 00:13:40.270 9.294 - 9.350: 99.8273% ( 1) 
00:13:40.270 9.461 - 9.517: 99.8335% ( 1) 00:13:40.270 9.517 - 9.572: 99.8396% ( 1) 00:13:40.270 9.962 - 10.017: 99.8458% ( 1) 00:13:40.270 16.362 - 16.473: 99.8520% ( 1) 00:13:40.270 3989.148 - 4017.642: 100.0000% ( 24) 00:13:40.270 00:13:40.270 Complete histogram 00:13:40.270 ================== 00:13:40.270 Range in us Cumulative Count 00:13:40.270 1.809 - 1.823: 0.0123% ( 2) 00:13:40.270 1.823 - 1.837: 0.6045% ( 96) 00:13:40.270 1.837 - 1.850: 1.9492% ( 218) 00:13:40.270 1.850 - 1.864: 5.2985% ( 543) 00:13:40.270 1.864 - [2024-11-18 12:56:37.874636] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:40.270 1.878: 50.1172% ( 7266) 00:13:40.270 1.878 - 1.892: 88.0952% ( 6157) 00:13:40.270 1.892 - 1.906: 94.1648% ( 984) 00:13:40.270 1.906 - 1.920: 96.6445% ( 402) 00:13:40.270 1.920 - 1.934: 97.2490% ( 98) 00:13:40.270 1.934 - 1.948: 97.9213% ( 109) 00:13:40.270 1.948 - 1.962: 98.7849% ( 140) 00:13:40.270 1.962 - 1.976: 99.2660% ( 78) 00:13:40.270 1.976 - 1.990: 99.3585% ( 15) 00:13:40.270 1.990 - 2.003: 99.3832% ( 4) 00:13:40.270 2.031 - 2.045: 99.3893% ( 1) 00:13:40.270 2.045 - 2.059: 99.3955% ( 1) 00:13:40.270 2.059 - 2.073: 99.4017% ( 1) 00:13:40.270 2.115 - 2.129: 99.4078% ( 1) 00:13:40.270 2.463 - 2.477: 99.4140% ( 1) 00:13:40.270 4.035 - 4.063: 99.4202% ( 1) 00:13:40.270 4.146 - 4.174: 99.4264% ( 1) 00:13:40.270 4.230 - 4.257: 99.4325% ( 1) 00:13:40.270 4.285 - 4.313: 99.4387% ( 1) 00:13:40.270 4.313 - 4.341: 99.4449% ( 1) 00:13:40.270 4.508 - 4.536: 99.4510% ( 1) 00:13:40.270 4.758 - 4.786: 99.4572% ( 1) 00:13:40.270 4.814 - 4.842: 99.4634% ( 1) 00:13:40.270 4.953 - 4.981: 99.4695% ( 1) 00:13:40.270 5.037 - 5.064: 99.4757% ( 1) 00:13:40.270 5.120 - 5.148: 99.4819% ( 1) 00:13:40.270 5.510 - 5.537: 99.4880% ( 1) 00:13:40.270 5.677 - 5.704: 99.4942% ( 1) 00:13:40.270 5.732 - 5.760: 99.5004% ( 1) 00:13:40.270 6.400 - 6.428: 99.5065% ( 1) 00:13:40.270 6.483 - 6.511: 99.5127% ( 1) 00:13:40.270 6.817 - 
6.845: 99.5189% ( 1) 00:13:40.270 7.012 - 7.040: 99.5250% ( 1) 00:13:40.270 7.179 - 7.235: 99.5374% ( 2) 00:13:40.270 7.235 - 7.290: 99.5435% ( 1) 00:13:40.270 7.290 - 7.346: 99.5497% ( 1) 00:13:40.270 7.457 - 7.513: 99.5621% ( 2) 00:13:40.270 7.624 - 7.680: 99.5682% ( 1) 00:13:40.270 7.680 - 7.736: 99.5867% ( 3) 00:13:40.270 8.682 - 8.737: 99.5929% ( 1) 00:13:40.270 15.026 - 15.137: 99.6052% ( 2) 00:13:40.270 3989.148 - 4017.642: 100.0000% ( 64) 00:13:40.270 00:13:40.270 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:40.270 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:40.270 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:40.270 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:40.270 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:40.531 [ 00:13:40.531 { 00:13:40.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:40.531 "subtype": "Discovery", 00:13:40.531 "listen_addresses": [], 00:13:40.531 "allow_any_host": true, 00:13:40.531 "hosts": [] 00:13:40.531 }, 00:13:40.531 { 00:13:40.531 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:40.531 "subtype": "NVMe", 00:13:40.531 "listen_addresses": [ 00:13:40.531 { 00:13:40.531 "trtype": "VFIOUSER", 00:13:40.531 "adrfam": "IPv4", 00:13:40.531 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:40.531 "trsvcid": "0" 00:13:40.531 } 00:13:40.531 ], 00:13:40.531 "allow_any_host": true, 00:13:40.531 "hosts": [], 00:13:40.531 "serial_number": "SPDK1", 00:13:40.531 "model_number": "SPDK bdev Controller", 00:13:40.531 
"max_namespaces": 32, 00:13:40.531 "min_cntlid": 1, 00:13:40.531 "max_cntlid": 65519, 00:13:40.531 "namespaces": [ 00:13:40.531 { 00:13:40.531 "nsid": 1, 00:13:40.531 "bdev_name": "Malloc1", 00:13:40.531 "name": "Malloc1", 00:13:40.531 "nguid": "EC14D1F6E1044226B715547B553D1400", 00:13:40.531 "uuid": "ec14d1f6-e104-4226-b715-547b553d1400" 00:13:40.531 } 00:13:40.531 ] 00:13:40.531 }, 00:13:40.531 { 00:13:40.531 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:40.531 "subtype": "NVMe", 00:13:40.531 "listen_addresses": [ 00:13:40.531 { 00:13:40.531 "trtype": "VFIOUSER", 00:13:40.531 "adrfam": "IPv4", 00:13:40.531 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:40.531 "trsvcid": "0" 00:13:40.531 } 00:13:40.531 ], 00:13:40.531 "allow_any_host": true, 00:13:40.531 "hosts": [], 00:13:40.531 "serial_number": "SPDK2", 00:13:40.531 "model_number": "SPDK bdev Controller", 00:13:40.531 "max_namespaces": 32, 00:13:40.531 "min_cntlid": 1, 00:13:40.531 "max_cntlid": 65519, 00:13:40.531 "namespaces": [ 00:13:40.531 { 00:13:40.531 "nsid": 1, 00:13:40.531 "bdev_name": "Malloc2", 00:13:40.531 "name": "Malloc2", 00:13:40.531 "nguid": "CC698650B92E4C38BF4F113005FE986C", 00:13:40.531 "uuid": "cc698650-b92e-4c38-bf4f-113005fe986c" 00:13:40.531 } 00:13:40.531 ] 00:13:40.531 } 00:13:40.531 ] 00:13:40.531 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:40.531 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:40.531 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2288892 00:13:40.531 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:40.531 12:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:13:40.531 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:40.531 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:40.531 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:13:40.531 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:40.531 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:40.791 [2024-11-18 12:56:38.270785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:40.791 Malloc3 00:13:40.791 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:41.051 [2024-11-18 12:56:38.536827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:41.051 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:41.051 Asynchronous Event Request test 00:13:41.051 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:41.051 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:41.051 Registering asynchronous event callbacks... 00:13:41.051 Starting namespace attribute notice tests for all controllers... 
00:13:41.051 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:41.051 aer_cb - Changed Namespace 00:13:41.051 Cleaning up... 00:13:41.051 [ 00:13:41.051 { 00:13:41.051 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:41.051 "subtype": "Discovery", 00:13:41.051 "listen_addresses": [], 00:13:41.051 "allow_any_host": true, 00:13:41.051 "hosts": [] 00:13:41.051 }, 00:13:41.051 { 00:13:41.051 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:41.051 "subtype": "NVMe", 00:13:41.051 "listen_addresses": [ 00:13:41.051 { 00:13:41.051 "trtype": "VFIOUSER", 00:13:41.051 "adrfam": "IPv4", 00:13:41.051 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:41.051 "trsvcid": "0" 00:13:41.051 } 00:13:41.051 ], 00:13:41.051 "allow_any_host": true, 00:13:41.051 "hosts": [], 00:13:41.051 "serial_number": "SPDK1", 00:13:41.051 "model_number": "SPDK bdev Controller", 00:13:41.051 "max_namespaces": 32, 00:13:41.051 "min_cntlid": 1, 00:13:41.051 "max_cntlid": 65519, 00:13:41.051 "namespaces": [ 00:13:41.051 { 00:13:41.051 "nsid": 1, 00:13:41.051 "bdev_name": "Malloc1", 00:13:41.051 "name": "Malloc1", 00:13:41.051 "nguid": "EC14D1F6E1044226B715547B553D1400", 00:13:41.051 "uuid": "ec14d1f6-e104-4226-b715-547b553d1400" 00:13:41.051 }, 00:13:41.051 { 00:13:41.051 "nsid": 2, 00:13:41.051 "bdev_name": "Malloc3", 00:13:41.051 "name": "Malloc3", 00:13:41.051 "nguid": "165AC7F12A3F437094AD76B24D3FA86E", 00:13:41.051 "uuid": "165ac7f1-2a3f-4370-94ad-76b24d3fa86e" 00:13:41.051 } 00:13:41.051 ] 00:13:41.051 }, 00:13:41.051 { 00:13:41.051 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:41.051 "subtype": "NVMe", 00:13:41.051 "listen_addresses": [ 00:13:41.051 { 00:13:41.051 "trtype": "VFIOUSER", 00:13:41.051 "adrfam": "IPv4", 00:13:41.051 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:41.051 "trsvcid": "0" 00:13:41.051 } 00:13:41.051 ], 00:13:41.051 "allow_any_host": true, 00:13:41.051 "hosts": [], 00:13:41.051 "serial_number": 
"SPDK2", 00:13:41.051 "model_number": "SPDK bdev Controller", 00:13:41.051 "max_namespaces": 32, 00:13:41.051 "min_cntlid": 1, 00:13:41.051 "max_cntlid": 65519, 00:13:41.051 "namespaces": [ 00:13:41.051 { 00:13:41.051 "nsid": 1, 00:13:41.051 "bdev_name": "Malloc2", 00:13:41.051 "name": "Malloc2", 00:13:41.051 "nguid": "CC698650B92E4C38BF4F113005FE986C", 00:13:41.051 "uuid": "cc698650-b92e-4c38-bf4f-113005fe986c" 00:13:41.051 } 00:13:41.051 ] 00:13:41.051 } 00:13:41.051 ] 00:13:41.313 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2288892 00:13:41.313 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:41.313 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:41.313 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:41.313 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:41.313 [2024-11-18 12:56:38.800964] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:13:41.313 [2024-11-18 12:56:38.801011] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289099 ] 00:13:41.313 [2024-11-18 12:56:38.843046] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:41.313 [2024-11-18 12:56:38.845286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:41.313 [2024-11-18 12:56:38.845306] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f17d7a9b000 00:13:41.313 [2024-11-18 12:56:38.846293] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:41.313 [2024-11-18 12:56:38.847302] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:41.313 [2024-11-18 12:56:38.848307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:41.313 [2024-11-18 12:56:38.849315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:41.313 [2024-11-18 12:56:38.850317] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:41.313 [2024-11-18 12:56:38.851325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:41.313 [2024-11-18 12:56:38.852326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:41.313 
[2024-11-18 12:56:38.856357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:41.313 [2024-11-18 12:56:38.856386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:41.313 [2024-11-18 12:56:38.856399] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f17d7a90000 00:13:41.313 [2024-11-18 12:56:38.857337] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:41.313 [2024-11-18 12:56:38.866847] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:41.313 [2024-11-18 12:56:38.866871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:41.313 [2024-11-18 12:56:38.871962] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:41.313 [2024-11-18 12:56:38.872003] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:41.313 [2024-11-18 12:56:38.872075] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:41.313 [2024-11-18 12:56:38.872088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:41.313 [2024-11-18 12:56:38.872093] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:41.313 [2024-11-18 12:56:38.872961] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:41.313 [2024-11-18 12:56:38.872971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:41.313 [2024-11-18 12:56:38.872978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:41.313 [2024-11-18 12:56:38.873972] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:41.313 [2024-11-18 12:56:38.873981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:41.313 [2024-11-18 12:56:38.873988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:41.314 [2024-11-18 12:56:38.874977] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:41.314 [2024-11-18 12:56:38.874986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:41.314 [2024-11-18 12:56:38.875982] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:41.314 [2024-11-18 12:56:38.875991] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:41.314 [2024-11-18 12:56:38.875995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:41.314 [2024-11-18 12:56:38.876001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:41.314 [2024-11-18 12:56:38.876109] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:41.314 [2024-11-18 12:56:38.876115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:41.314 [2024-11-18 12:56:38.876120] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:41.314 [2024-11-18 12:56:38.876993] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:41.314 [2024-11-18 12:56:38.877998] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:41.314 [2024-11-18 12:56:38.879011] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:41.314 [2024-11-18 12:56:38.880011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:41.314 [2024-11-18 12:56:38.880050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:41.314 [2024-11-18 12:56:38.881028] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:41.314 [2024-11-18 12:56:38.881037] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:41.314 [2024-11-18 12:56:38.881042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.881059] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:41.314 [2024-11-18 12:56:38.881066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.881077] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:41.314 [2024-11-18 12:56:38.881081] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:41.314 [2024-11-18 12:56:38.881084] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:41.314 [2024-11-18 12:56:38.881096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:41.314 [2024-11-18 12:56:38.888360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:41.314 [2024-11-18 12:56:38.888371] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:41.314 [2024-11-18 12:56:38.888376] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:41.314 [2024-11-18 12:56:38.888380] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:41.314 [2024-11-18 12:56:38.888384] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:41.314 [2024-11-18 12:56:38.888388] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:41.314 [2024-11-18 12:56:38.888395] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:41.314 [2024-11-18 12:56:38.888399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.888406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.888416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:41.314 [2024-11-18 12:56:38.896358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:41.314 [2024-11-18 12:56:38.896372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.314 [2024-11-18 12:56:38.896380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.314 [2024-11-18 12:56:38.896388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.314 [2024-11-18 12:56:38.896395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.314 [2024-11-18 12:56:38.896399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.896405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.896414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:41.314 [2024-11-18 12:56:38.904358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:41.314 [2024-11-18 12:56:38.904369] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:41.314 [2024-11-18 12:56:38.904373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.904380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.904385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.904393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:41.314 [2024-11-18 12:56:38.912358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:41.314 [2024-11-18 12:56:38.912412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.912420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:41.314 
[2024-11-18 12:56:38.912427] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:41.314 [2024-11-18 12:56:38.912431] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:41.314 [2024-11-18 12:56:38.912434] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:41.314 [2024-11-18 12:56:38.912440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:41.314 [2024-11-18 12:56:38.920358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:41.314 [2024-11-18 12:56:38.920368] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:41.314 [2024-11-18 12:56:38.920379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.920386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.920395] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:41.314 [2024-11-18 12:56:38.920399] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:41.314 [2024-11-18 12:56:38.920402] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:41.314 [2024-11-18 12:56:38.920408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:41.314 [2024-11-18 12:56:38.928357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:41.314 [2024-11-18 12:56:38.928370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.928377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.928384] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:41.314 [2024-11-18 12:56:38.928388] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:41.314 [2024-11-18 12:56:38.928391] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:41.314 [2024-11-18 12:56:38.928397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:41.314 [2024-11-18 12:56:38.936358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:41.314 [2024-11-18 12:56:38.936367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.936374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.936381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.936386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.936391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.936395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:41.314 [2024-11-18 12:56:38.936400] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:41.314 [2024-11-18 12:56:38.936404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:41.315 [2024-11-18 12:56:38.936409] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:41.315 [2024-11-18 12:56:38.936423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:41.315 [2024-11-18 12:56:38.944356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:41.315 [2024-11-18 12:56:38.944369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:41.315 [2024-11-18 12:56:38.952357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:41.315 [2024-11-18 12:56:38.952371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:41.315 [2024-11-18 12:56:38.960358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:41.315 [2024-11-18 
12:56:38.960370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:41.315 [2024-11-18 12:56:38.968356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:41.315 [2024-11-18 12:56:38.968371] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:41.315 [2024-11-18 12:56:38.968375] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:41.315 [2024-11-18 12:56:38.968378] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:41.315 [2024-11-18 12:56:38.968382] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:41.315 [2024-11-18 12:56:38.968384] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:41.315 [2024-11-18 12:56:38.968390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:41.315 [2024-11-18 12:56:38.968397] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:41.315 [2024-11-18 12:56:38.968401] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:41.315 [2024-11-18 12:56:38.968404] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:41.315 [2024-11-18 12:56:38.968410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:41.315 [2024-11-18 12:56:38.968416] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:41.315 [2024-11-18 12:56:38.968420] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:41.315 [2024-11-18 12:56:38.968423] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:41.315 [2024-11-18 12:56:38.968428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:41.315 [2024-11-18 12:56:38.968437] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:41.315 [2024-11-18 12:56:38.968441] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:41.315 [2024-11-18 12:56:38.968444] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:41.315 [2024-11-18 12:56:38.968449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:41.315 [2024-11-18 12:56:38.976357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:41.315 [2024-11-18 12:56:38.976376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:41.315 [2024-11-18 12:56:38.976386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:41.315 [2024-11-18 12:56:38.976393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:41.315 ===================================================== 00:13:41.315 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:41.315 ===================================================== 00:13:41.315 Controller Capabilities/Features 00:13:41.315 
================================ 00:13:41.315 Vendor ID: 4e58 00:13:41.315 Subsystem Vendor ID: 4e58 00:13:41.315 Serial Number: SPDK2 00:13:41.315 Model Number: SPDK bdev Controller 00:13:41.315 Firmware Version: 25.01 00:13:41.315 Recommended Arb Burst: 6 00:13:41.315 IEEE OUI Identifier: 8d 6b 50 00:13:41.315 Multi-path I/O 00:13:41.315 May have multiple subsystem ports: Yes 00:13:41.315 May have multiple controllers: Yes 00:13:41.315 Associated with SR-IOV VF: No 00:13:41.315 Max Data Transfer Size: 131072 00:13:41.315 Max Number of Namespaces: 32 00:13:41.315 Max Number of I/O Queues: 127 00:13:41.315 NVMe Specification Version (VS): 1.3 00:13:41.315 NVMe Specification Version (Identify): 1.3 00:13:41.315 Maximum Queue Entries: 256 00:13:41.315 Contiguous Queues Required: Yes 00:13:41.315 Arbitration Mechanisms Supported 00:13:41.315 Weighted Round Robin: Not Supported 00:13:41.315 Vendor Specific: Not Supported 00:13:41.315 Reset Timeout: 15000 ms 00:13:41.315 Doorbell Stride: 4 bytes 00:13:41.315 NVM Subsystem Reset: Not Supported 00:13:41.315 Command Sets Supported 00:13:41.315 NVM Command Set: Supported 00:13:41.315 Boot Partition: Not Supported 00:13:41.315 Memory Page Size Minimum: 4096 bytes 00:13:41.315 Memory Page Size Maximum: 4096 bytes 00:13:41.315 Persistent Memory Region: Not Supported 00:13:41.315 Optional Asynchronous Events Supported 00:13:41.315 Namespace Attribute Notices: Supported 00:13:41.315 Firmware Activation Notices: Not Supported 00:13:41.315 ANA Change Notices: Not Supported 00:13:41.315 PLE Aggregate Log Change Notices: Not Supported 00:13:41.315 LBA Status Info Alert Notices: Not Supported 00:13:41.315 EGE Aggregate Log Change Notices: Not Supported 00:13:41.315 Normal NVM Subsystem Shutdown event: Not Supported 00:13:41.315 Zone Descriptor Change Notices: Not Supported 00:13:41.315 Discovery Log Change Notices: Not Supported 00:13:41.315 Controller Attributes 00:13:41.315 128-bit Host Identifier: Supported 00:13:41.315 
Non-Operational Permissive Mode: Not Supported 00:13:41.315 NVM Sets: Not Supported 00:13:41.315 Read Recovery Levels: Not Supported 00:13:41.315 Endurance Groups: Not Supported 00:13:41.315 Predictable Latency Mode: Not Supported 00:13:41.315 Traffic Based Keep ALive: Not Supported 00:13:41.315 Namespace Granularity: Not Supported 00:13:41.315 SQ Associations: Not Supported 00:13:41.315 UUID List: Not Supported 00:13:41.315 Multi-Domain Subsystem: Not Supported 00:13:41.315 Fixed Capacity Management: Not Supported 00:13:41.315 Variable Capacity Management: Not Supported 00:13:41.315 Delete Endurance Group: Not Supported 00:13:41.315 Delete NVM Set: Not Supported 00:13:41.315 Extended LBA Formats Supported: Not Supported 00:13:41.315 Flexible Data Placement Supported: Not Supported 00:13:41.315 00:13:41.315 Controller Memory Buffer Support 00:13:41.315 ================================ 00:13:41.315 Supported: No 00:13:41.315 00:13:41.315 Persistent Memory Region Support 00:13:41.315 ================================ 00:13:41.315 Supported: No 00:13:41.315 00:13:41.315 Admin Command Set Attributes 00:13:41.315 ============================ 00:13:41.315 Security Send/Receive: Not Supported 00:13:41.315 Format NVM: Not Supported 00:13:41.315 Firmware Activate/Download: Not Supported 00:13:41.315 Namespace Management: Not Supported 00:13:41.315 Device Self-Test: Not Supported 00:13:41.315 Directives: Not Supported 00:13:41.315 NVMe-MI: Not Supported 00:13:41.315 Virtualization Management: Not Supported 00:13:41.315 Doorbell Buffer Config: Not Supported 00:13:41.315 Get LBA Status Capability: Not Supported 00:13:41.315 Command & Feature Lockdown Capability: Not Supported 00:13:41.315 Abort Command Limit: 4 00:13:41.315 Async Event Request Limit: 4 00:13:41.315 Number of Firmware Slots: N/A 00:13:41.315 Firmware Slot 1 Read-Only: N/A 00:13:41.315 Firmware Activation Without Reset: N/A 00:13:41.315 Multiple Update Detection Support: N/A 00:13:41.315 Firmware Update 
Granularity: No Information Provided 00:13:41.315 Per-Namespace SMART Log: No 00:13:41.315 Asymmetric Namespace Access Log Page: Not Supported 00:13:41.315 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:41.315 Command Effects Log Page: Supported 00:13:41.315 Get Log Page Extended Data: Supported 00:13:41.315 Telemetry Log Pages: Not Supported 00:13:41.315 Persistent Event Log Pages: Not Supported 00:13:41.315 Supported Log Pages Log Page: May Support 00:13:41.315 Commands Supported & Effects Log Page: Not Supported 00:13:41.315 Feature Identifiers & Effects Log Page:May Support 00:13:41.315 NVMe-MI Commands & Effects Log Page: May Support 00:13:41.315 Data Area 4 for Telemetry Log: Not Supported 00:13:41.315 Error Log Page Entries Supported: 128 00:13:41.315 Keep Alive: Supported 00:13:41.315 Keep Alive Granularity: 10000 ms 00:13:41.315 00:13:41.315 NVM Command Set Attributes 00:13:41.315 ========================== 00:13:41.315 Submission Queue Entry Size 00:13:41.315 Max: 64 00:13:41.315 Min: 64 00:13:41.315 Completion Queue Entry Size 00:13:41.315 Max: 16 00:13:41.315 Min: 16 00:13:41.315 Number of Namespaces: 32 00:13:41.316 Compare Command: Supported 00:13:41.316 Write Uncorrectable Command: Not Supported 00:13:41.316 Dataset Management Command: Supported 00:13:41.316 Write Zeroes Command: Supported 00:13:41.316 Set Features Save Field: Not Supported 00:13:41.316 Reservations: Not Supported 00:13:41.316 Timestamp: Not Supported 00:13:41.316 Copy: Supported 00:13:41.316 Volatile Write Cache: Present 00:13:41.316 Atomic Write Unit (Normal): 1 00:13:41.316 Atomic Write Unit (PFail): 1 00:13:41.316 Atomic Compare & Write Unit: 1 00:13:41.316 Fused Compare & Write: Supported 00:13:41.316 Scatter-Gather List 00:13:41.316 SGL Command Set: Supported (Dword aligned) 00:13:41.316 SGL Keyed: Not Supported 00:13:41.316 SGL Bit Bucket Descriptor: Not Supported 00:13:41.316 SGL Metadata Pointer: Not Supported 00:13:41.316 Oversized SGL: Not Supported 00:13:41.316 SGL 
Metadata Address: Not Supported 00:13:41.316 SGL Offset: Not Supported 00:13:41.316 Transport SGL Data Block: Not Supported 00:13:41.316 Replay Protected Memory Block: Not Supported 00:13:41.316 00:13:41.316 Firmware Slot Information 00:13:41.316 ========================= 00:13:41.316 Active slot: 1 00:13:41.316 Slot 1 Firmware Revision: 25.01 00:13:41.316 00:13:41.316 00:13:41.316 Commands Supported and Effects 00:13:41.316 ============================== 00:13:41.316 Admin Commands 00:13:41.316 -------------- 00:13:41.316 Get Log Page (02h): Supported 00:13:41.316 Identify (06h): Supported 00:13:41.316 Abort (08h): Supported 00:13:41.316 Set Features (09h): Supported 00:13:41.316 Get Features (0Ah): Supported 00:13:41.316 Asynchronous Event Request (0Ch): Supported 00:13:41.316 Keep Alive (18h): Supported 00:13:41.316 I/O Commands 00:13:41.316 ------------ 00:13:41.316 Flush (00h): Supported LBA-Change 00:13:41.316 Write (01h): Supported LBA-Change 00:13:41.316 Read (02h): Supported 00:13:41.316 Compare (05h): Supported 00:13:41.316 Write Zeroes (08h): Supported LBA-Change 00:13:41.316 Dataset Management (09h): Supported LBA-Change 00:13:41.316 Copy (19h): Supported LBA-Change 00:13:41.316 00:13:41.316 Error Log 00:13:41.316 ========= 00:13:41.316 00:13:41.316 Arbitration 00:13:41.316 =========== 00:13:41.316 Arbitration Burst: 1 00:13:41.316 00:13:41.316 Power Management 00:13:41.316 ================ 00:13:41.316 Number of Power States: 1 00:13:41.316 Current Power State: Power State #0 00:13:41.316 Power State #0: 00:13:41.316 Max Power: 0.00 W 00:13:41.316 Non-Operational State: Operational 00:13:41.316 Entry Latency: Not Reported 00:13:41.316 Exit Latency: Not Reported 00:13:41.316 Relative Read Throughput: 0 00:13:41.316 Relative Read Latency: 0 00:13:41.316 Relative Write Throughput: 0 00:13:41.316 Relative Write Latency: 0 00:13:41.316 Idle Power: Not Reported 00:13:41.316 Active Power: Not Reported 00:13:41.316 Non-Operational Permissive Mode: Not 
Supported 00:13:41.316 00:13:41.316 Health Information 00:13:41.316 ================== 00:13:41.316 Critical Warnings: 00:13:41.316 Available Spare Space: OK 00:13:41.316 Temperature: OK 00:13:41.316 Device Reliability: OK 00:13:41.316 Read Only: No 00:13:41.316 Volatile Memory Backup: OK 00:13:41.316 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:41.316 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:41.316 Available Spare: 0% 00:13:41.316 Available Sp[2024-11-18 12:56:38.976479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:41.316 [2024-11-18 12:56:38.984358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:41.316 [2024-11-18 12:56:38.984384] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:41.316 [2024-11-18 12:56:38.984395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.316 [2024-11-18 12:56:38.984401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.316 [2024-11-18 12:56:38.984407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.316 [2024-11-18 12:56:38.984412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.316 [2024-11-18 12:56:38.984468] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:41.316 [2024-11-18 12:56:38.984479] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:41.316 
[2024-11-18 12:56:38.985476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:41.316 [2024-11-18 12:56:38.985518] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:41.316 [2024-11-18 12:56:38.985525] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:41.316 [2024-11-18 12:56:38.986478] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:41.316 [2024-11-18 12:56:38.986489] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:41.316 [2024-11-18 12:56:38.986535] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:41.316 [2024-11-18 12:56:38.987514] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:41.577 are Threshold: 0% 00:13:41.577 Life Percentage Used: 0% 00:13:41.577 Data Units Read: 0 00:13:41.577 Data Units Written: 0 00:13:41.577 Host Read Commands: 0 00:13:41.577 Host Write Commands: 0 00:13:41.577 Controller Busy Time: 0 minutes 00:13:41.577 Power Cycles: 0 00:13:41.577 Power On Hours: 0 hours 00:13:41.577 Unsafe Shutdowns: 0 00:13:41.577 Unrecoverable Media Errors: 0 00:13:41.577 Lifetime Error Log Entries: 0 00:13:41.577 Warning Temperature Time: 0 minutes 00:13:41.577 Critical Temperature Time: 0 minutes 00:13:41.577 00:13:41.577 Number of Queues 00:13:41.577 ================ 00:13:41.577 Number of I/O Submission Queues: 127 00:13:41.577 Number of I/O Completion Queues: 127 00:13:41.577 00:13:41.577 Active Namespaces 00:13:41.577 ================= 00:13:41.577 Namespace ID:1 00:13:41.577 Error Recovery Timeout: Unlimited 
00:13:41.577 Command Set Identifier: NVM (00h) 00:13:41.577 Deallocate: Supported 00:13:41.577 Deallocated/Unwritten Error: Not Supported 00:13:41.577 Deallocated Read Value: Unknown 00:13:41.577 Deallocate in Write Zeroes: Not Supported 00:13:41.577 Deallocated Guard Field: 0xFFFF 00:13:41.577 Flush: Supported 00:13:41.577 Reservation: Supported 00:13:41.577 Namespace Sharing Capabilities: Multiple Controllers 00:13:41.577 Size (in LBAs): 131072 (0GiB) 00:13:41.577 Capacity (in LBAs): 131072 (0GiB) 00:13:41.577 Utilization (in LBAs): 131072 (0GiB) 00:13:41.577 NGUID: CC698650B92E4C38BF4F113005FE986C 00:13:41.577 UUID: cc698650-b92e-4c38-bf4f-113005fe986c 00:13:41.577 Thin Provisioning: Not Supported 00:13:41.577 Per-NS Atomic Units: Yes 00:13:41.577 Atomic Boundary Size (Normal): 0 00:13:41.577 Atomic Boundary Size (PFail): 0 00:13:41.577 Atomic Boundary Offset: 0 00:13:41.577 Maximum Single Source Range Length: 65535 00:13:41.577 Maximum Copy Length: 65535 00:13:41.577 Maximum Source Range Count: 1 00:13:41.577 NGUID/EUI64 Never Reused: No 00:13:41.577 Namespace Write Protected: No 00:13:41.577 Number of LBA Formats: 1 00:13:41.577 Current LBA Format: LBA Format #00 00:13:41.577 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:41.577 00:13:41.577 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:41.577 [2024-11-18 12:56:39.215908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:46.858 Initializing NVMe Controllers 00:13:46.858 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:46.858 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:46.858 Initialization complete. Launching workers. 00:13:46.858 ======================================================== 00:13:46.858 Latency(us) 00:13:46.858 Device Information : IOPS MiB/s Average min max 00:13:46.858 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39894.48 155.84 3208.06 982.30 10342.89 00:13:46.858 ======================================================== 00:13:46.858 Total : 39894.48 155.84 3208.06 982.30 10342.89 00:13:46.858 00:13:46.858 [2024-11-18 12:56:44.324616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:46.858 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:47.119 [2024-11-18 12:56:44.563320] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:52.402 Initializing NVMe Controllers 00:13:52.402 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:52.402 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:52.402 Initialization complete. Launching workers. 
00:13:52.402 ======================================================== 00:13:52.402 Latency(us) 00:13:52.402 Device Information : IOPS MiB/s Average min max 00:13:52.402 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39909.87 155.90 3207.04 991.63 10197.02 00:13:52.402 ======================================================== 00:13:52.402 Total : 39909.87 155.90 3207.04 991.63 10197.02 00:13:52.402 00:13:52.402 [2024-11-18 12:56:49.581147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:52.402 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:52.402 [2024-11-18 12:56:49.794711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:57.686 [2024-11-18 12:56:54.923455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:57.686 Initializing NVMe Controllers 00:13:57.686 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:57.686 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:57.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:57.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:57.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:57.686 Initialization complete. Launching workers. 
00:13:57.686 Starting thread on core 2 00:13:57.686 Starting thread on core 3 00:13:57.686 Starting thread on core 1 00:13:57.686 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:57.686 [2024-11-18 12:56:55.217825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:00.980 [2024-11-18 12:56:58.292873] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:00.980 Initializing NVMe Controllers 00:14:00.980 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.980 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.980 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:00.980 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:00.980 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:00.980 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:00.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:00.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:00.980 Initialization complete. Launching workers. 
00:14:00.980 Starting thread on core 1 with urgent priority queue 00:14:00.980 Starting thread on core 2 with urgent priority queue 00:14:00.980 Starting thread on core 3 with urgent priority queue 00:14:00.980 Starting thread on core 0 with urgent priority queue 00:14:00.980 SPDK bdev Controller (SPDK2 ) core 0: 8518.00 IO/s 11.74 secs/100000 ios 00:14:00.980 SPDK bdev Controller (SPDK2 ) core 1: 7291.33 IO/s 13.71 secs/100000 ios 00:14:00.980 SPDK bdev Controller (SPDK2 ) core 2: 7580.67 IO/s 13.19 secs/100000 ios 00:14:00.980 SPDK bdev Controller (SPDK2 ) core 3: 8418.00 IO/s 11.88 secs/100000 ios 00:14:00.980 ======================================================== 00:14:00.980 00:14:00.980 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:00.980 [2024-11-18 12:56:58.584422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:00.980 Initializing NVMe Controllers 00:14:00.980 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.980 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.980 Namespace ID: 1 size: 0GB 00:14:00.980 Initialization complete. 00:14:00.980 INFO: using host memory buffer for IO 00:14:00.980 Hello world! 
00:14:00.981 [2024-11-18 12:56:58.596520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:00.981 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:01.239 [2024-11-18 12:56:58.879212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:02.618 Initializing NVMe Controllers 00:14:02.618 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:02.618 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:02.618 Initialization complete. Launching workers. 00:14:02.618 submit (in ns) avg, min, max = 5999.1, 3253.0, 4000118.3 00:14:02.618 complete (in ns) avg, min, max = 22816.9, 1769.6, 4996154.8 00:14:02.618 00:14:02.618 Submit histogram 00:14:02.618 ================ 00:14:02.618 Range in us Cumulative Count 00:14:02.618 3.242 - 3.256: 0.0062% ( 1) 00:14:02.618 3.256 - 3.270: 0.0187% ( 2) 00:14:02.618 3.270 - 3.283: 0.0499% ( 5) 00:14:02.618 3.283 - 3.297: 0.1684% ( 19) 00:14:02.618 3.297 - 3.311: 0.3867% ( 35) 00:14:02.618 3.311 - 3.325: 0.7610% ( 60) 00:14:02.618 3.325 - 3.339: 1.5344% ( 124) 00:14:02.618 3.339 - 3.353: 4.4661% ( 470) 00:14:02.618 3.353 - 3.367: 8.8698% ( 706) 00:14:02.618 3.367 - 3.381: 14.3650% ( 881) 00:14:02.618 3.381 - 3.395: 20.5152% ( 986) 00:14:02.618 3.395 - 3.409: 26.5781% ( 972) 00:14:02.618 3.409 - 3.423: 31.9174% ( 856) 00:14:02.618 3.423 - 3.437: 37.2068% ( 848) 00:14:02.618 3.437 - 3.450: 42.7146% ( 883) 00:14:02.619 3.450 - 3.464: 46.9935% ( 686) 00:14:02.619 3.464 - 3.478: 51.1539% ( 667) 00:14:02.619 3.478 - 3.492: 56.2126% ( 811) 00:14:02.619 3.492 - 3.506: 62.4750% ( 1004) 00:14:02.619 3.506 - 3.520: 67.5462% ( 813) 00:14:02.619 3.520 - 3.534: 71.8001% ( 682) 00:14:02.619 
3.534 - 3.548: 77.0584% ( 843) 00:14:02.619 3.548 - 3.562: 81.0067% ( 633) 00:14:02.619 3.562 - 3.590: 85.6225% ( 740) 00:14:02.619 3.590 - 3.617: 87.2505% ( 261) 00:14:02.619 3.617 - 3.645: 88.1612% ( 146) 00:14:02.619 3.645 - 3.673: 89.5334% ( 220) 00:14:02.619 3.673 - 3.701: 91.3049% ( 284) 00:14:02.619 3.701 - 3.729: 92.9079% ( 257) 00:14:02.619 3.729 - 3.757: 94.6794% ( 284) 00:14:02.619 3.757 - 3.784: 96.4197% ( 279) 00:14:02.619 3.784 - 3.812: 97.6110% ( 191) 00:14:02.619 3.812 - 3.840: 98.4344% ( 132) 00:14:02.619 3.840 - 3.868: 98.9833% ( 88) 00:14:02.619 3.868 - 3.896: 99.3700% ( 62) 00:14:02.619 3.896 - 3.923: 99.5259% ( 25) 00:14:02.619 3.923 - 3.951: 99.5634% ( 6) 00:14:02.619 3.951 - 3.979: 99.5883% ( 4) 00:14:02.619 5.343 - 5.370: 99.5946% ( 1) 00:14:02.619 5.482 - 5.510: 99.6008% ( 1) 00:14:02.619 5.677 - 5.704: 99.6133% ( 2) 00:14:02.619 5.704 - 5.732: 99.6257% ( 2) 00:14:02.619 5.816 - 5.843: 99.6320% ( 1) 00:14:02.619 5.927 - 5.955: 99.6382% ( 1) 00:14:02.619 5.983 - 6.010: 99.6445% ( 1) 00:14:02.619 6.094 - 6.122: 99.6507% ( 1) 00:14:02.619 6.177 - 6.205: 99.6569% ( 1) 00:14:02.619 6.261 - 6.289: 99.6632% ( 1) 00:14:02.619 6.400 - 6.428: 99.6694% ( 1) 00:14:02.619 6.483 - 6.511: 99.6756% ( 1) 00:14:02.619 6.678 - 6.706: 99.6819% ( 1) 00:14:02.619 6.706 - 6.734: 99.6881% ( 1) 00:14:02.619 6.817 - 6.845: 99.6944% ( 1) 00:14:02.619 7.123 - 7.179: 99.7068% ( 2) 00:14:02.619 7.235 - 7.290: 99.7131% ( 1) 00:14:02.619 7.290 - 7.346: 99.7193% ( 1) 00:14:02.619 7.346 - 7.402: 99.7255% ( 1) 00:14:02.619 7.402 - 7.457: 99.7380% ( 2) 00:14:02.619 7.513 - 7.569: 99.7443% ( 1) 00:14:02.619 7.569 - 7.624: 99.7505% ( 1) 00:14:02.619 7.736 - 7.791: 99.7692% ( 3) 00:14:02.619 7.847 - 7.903: 99.7817% ( 2) 00:14:02.619 7.903 - 7.958: 99.7879% ( 1) 00:14:02.619 7.958 - 8.014: 99.8004% ( 2) 00:14:02.619 8.125 - 8.181: 99.8191% ( 3) 00:14:02.619 8.348 - 8.403: 99.8253% ( 1) 00:14:02.619 8.403 - 8.459: 99.8316% ( 1) 00:14:02.619 8.570 - 8.626: 99.8378% ( 1) 
00:14:02.619 8.737 - 8.793: 99.8503% ( 2) 00:14:02.619 8.793 - 8.849: 99.8565% ( 1) 00:14:02.619 8.960 - 9.016: 99.8628% ( 1) 00:14:02.619 9.127 - 9.183: 99.8690% ( 1) 00:14:02.619 9.238 - 9.294: 99.8752% ( 1) 00:14:02.619 9.350 - 9.405: 99.8815% ( 1) 00:14:02.619 9.405 - 9.461: 99.8877% ( 1) 00:14:02.619 9.517 - 9.572: 99.8940% ( 1) 00:14:02.619 9.628 - 9.683: 99.9002% ( 1) 00:14:02.619 9.683 - 9.739: 99.9064% ( 1) 00:14:02.619 9.906 - 9.962: 99.9127% ( 1) 00:14:02.619 [2024-11-18 12:56:59.979418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:02.619 10.296 - 10.351: 99.9189% ( 1) 00:14:02.619 10.407 - 10.463: 99.9251% ( 1) 00:14:02.619 13.523 - 13.579: 99.9314% ( 1) 00:14:02.619 13.913 - 13.969: 99.9376% ( 1) 00:14:02.619 3989.148 - 4017.642: 100.0000% ( 10) 00:14:02.619 00:14:02.619 Complete histogram 00:14:02.619 ================== 00:14:02.619 Range in us Cumulative Count 00:14:02.619 1.767 - 1.774: 0.0125% ( 2) 00:14:02.619 1.774 - 1.781: 0.0250% ( 2) 00:14:02.619 1.781 - 1.795: 0.0437% ( 3) 00:14:02.619 1.795 - 1.809: 0.0499% ( 1) 00:14:02.619 1.809 - 1.823: 0.7173% ( 107) 00:14:02.619 1.823 - 1.837: 3.7675% ( 489) 00:14:02.619 1.837 - 1.850: 5.5951% ( 293) 00:14:02.619 1.850 - 1.864: 6.6866% ( 175) 00:14:02.619 1.864 - 1.878: 25.7360% ( 3054) 00:14:02.619 1.878 - 1.892: 79.8341% ( 8673) 00:14:02.619 1.892 - 1.906: 93.1824% ( 2140) 00:14:02.619 1.906 - 1.920: 96.5007% ( 532) 00:14:02.619 1.920 - 1.934: 97.3615% ( 138) 00:14:02.619 1.934 - 1.948: 97.8917% ( 85) 00:14:02.619 1.948 - 1.962: 98.4905% ( 96) 00:14:02.619 1.962 - 1.976: 98.9895% ( 80) 00:14:02.619 1.976 - 1.990: 99.2203% ( 37) 00:14:02.619 1.990 - 2.003: 99.2577% ( 6) 00:14:02.619 2.003 - 2.017: 99.2640% ( 1) 00:14:02.619 2.017 - 2.031: 99.2702% ( 1) 00:14:02.619 2.045 - 2.059: 99.2764% ( 1) 00:14:02.619 2.073 - 2.087: 99.2827% ( 1) 00:14:02.619 2.101 - 2.115: 99.2889% ( 1) 00:14:02.619 3.562 - 3.590: 99.2952% ( 1) 00:14:02.619 3.784 - 
3.812: 99.3014% ( 1) 00:14:02.619 3.840 - 3.868: 99.3139% ( 2) 00:14:02.619 3.868 - 3.896: 99.3201% ( 1) 00:14:02.619 4.146 - 4.174: 99.3263% ( 1) 00:14:02.619 4.313 - 4.341: 99.3326% ( 1) 00:14:02.619 4.341 - 4.369: 99.3451% ( 2) 00:14:02.619 4.536 - 4.563: 99.3513% ( 1) 00:14:02.619 4.730 - 4.758: 99.3575% ( 1) 00:14:02.619 4.786 - 4.814: 99.3638% ( 1) 00:14:02.619 5.092 - 5.120: 99.3700% ( 1) 00:14:02.619 5.259 - 5.287: 99.3762% ( 1) 00:14:02.619 5.482 - 5.510: 99.3825% ( 1) 00:14:02.619 5.677 - 5.704: 99.3887% ( 1) 00:14:02.619 5.899 - 5.927: 99.3950% ( 1) 00:14:02.619 6.038 - 6.066: 99.4012% ( 1) 00:14:02.619 6.150 - 6.177: 99.4074% ( 1) 00:14:02.619 6.233 - 6.261: 99.4199% ( 2) 00:14:02.619 6.623 - 6.650: 99.4261% ( 1) 00:14:02.619 6.706 - 6.734: 99.4386% ( 2) 00:14:02.619 7.235 - 7.290: 99.4449% ( 1) 00:14:02.619 8.348 - 8.403: 99.4511% ( 1) 00:14:02.619 13.523 - 13.579: 99.4573% ( 1) 00:14:02.619 21.370 - 21.482: 99.4636% ( 1) 00:14:02.619 39.847 - 40.070: 99.4698% ( 1) 00:14:02.619 43.854 - 44.077: 99.4760% ( 1) 00:14:02.619 2877.885 - 2892.132: 99.4823% ( 1) 00:14:02.619 3903.666 - 3932.160: 99.4885% ( 1) 00:14:02.619 3989.148 - 4017.642: 99.9875% ( 80) 00:14:02.619 4017.642 - 4046.136: 99.9938% ( 1) 00:14:02.619 4986.435 - 5014.929: 100.0000% ( 1) 00:14:02.619 00:14:02.619 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:02.619 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:02.619 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:02.619 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:02.619 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:02.619 [ 00:14:02.619 { 00:14:02.619 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:02.619 "subtype": "Discovery", 00:14:02.619 "listen_addresses": [], 00:14:02.619 "allow_any_host": true, 00:14:02.619 "hosts": [] 00:14:02.619 }, 00:14:02.619 { 00:14:02.619 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:02.619 "subtype": "NVMe", 00:14:02.619 "listen_addresses": [ 00:14:02.619 { 00:14:02.619 "trtype": "VFIOUSER", 00:14:02.619 "adrfam": "IPv4", 00:14:02.619 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:02.619 "trsvcid": "0" 00:14:02.619 } 00:14:02.619 ], 00:14:02.619 "allow_any_host": true, 00:14:02.619 "hosts": [], 00:14:02.619 "serial_number": "SPDK1", 00:14:02.619 "model_number": "SPDK bdev Controller", 00:14:02.619 "max_namespaces": 32, 00:14:02.619 "min_cntlid": 1, 00:14:02.619 "max_cntlid": 65519, 00:14:02.619 "namespaces": [ 00:14:02.619 { 00:14:02.619 "nsid": 1, 00:14:02.619 "bdev_name": "Malloc1", 00:14:02.619 "name": "Malloc1", 00:14:02.619 "nguid": "EC14D1F6E1044226B715547B553D1400", 00:14:02.619 "uuid": "ec14d1f6-e104-4226-b715-547b553d1400" 00:14:02.619 }, 00:14:02.619 { 00:14:02.619 "nsid": 2, 00:14:02.619 "bdev_name": "Malloc3", 00:14:02.619 "name": "Malloc3", 00:14:02.619 "nguid": "165AC7F12A3F437094AD76B24D3FA86E", 00:14:02.619 "uuid": "165ac7f1-2a3f-4370-94ad-76b24d3fa86e" 00:14:02.619 } 00:14:02.619 ] 00:14:02.619 }, 00:14:02.619 { 00:14:02.619 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:02.619 "subtype": "NVMe", 00:14:02.619 "listen_addresses": [ 00:14:02.619 { 00:14:02.619 "trtype": "VFIOUSER", 00:14:02.619 "adrfam": "IPv4", 00:14:02.619 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:02.619 "trsvcid": "0" 00:14:02.619 } 00:14:02.619 ], 00:14:02.619 "allow_any_host": true, 00:14:02.619 "hosts": [], 00:14:02.619 "serial_number": "SPDK2", 00:14:02.619 "model_number": "SPDK bdev Controller", 00:14:02.619 "max_namespaces": 32, 
00:14:02.620 "min_cntlid": 1, 00:14:02.620 "max_cntlid": 65519, 00:14:02.620 "namespaces": [ 00:14:02.620 { 00:14:02.620 "nsid": 1, 00:14:02.620 "bdev_name": "Malloc2", 00:14:02.620 "name": "Malloc2", 00:14:02.620 "nguid": "CC698650B92E4C38BF4F113005FE986C", 00:14:02.620 "uuid": "cc698650-b92e-4c38-bf4f-113005fe986c" 00:14:02.620 } 00:14:02.620 ] 00:14:02.620 } 00:14:02.620 ] 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2292599 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:02.620 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:02.884 [2024-11-18 12:57:00.397798] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:02.884 Malloc4 00:14:02.884 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:03.145 [2024-11-18 12:57:00.648502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:03.145 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:03.145 Asynchronous Event Request test 00:14:03.145 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:03.145 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:03.145 Registering asynchronous event callbacks... 00:14:03.145 Starting namespace attribute notice tests for all controllers... 00:14:03.145 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:03.145 aer_cb - Changed Namespace 00:14:03.145 Cleaning up... 
00:14:03.405 [ 00:14:03.405 { 00:14:03.405 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:03.405 "subtype": "Discovery", 00:14:03.405 "listen_addresses": [], 00:14:03.405 "allow_any_host": true, 00:14:03.405 "hosts": [] 00:14:03.405 }, 00:14:03.405 { 00:14:03.405 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:03.405 "subtype": "NVMe", 00:14:03.405 "listen_addresses": [ 00:14:03.405 { 00:14:03.405 "trtype": "VFIOUSER", 00:14:03.405 "adrfam": "IPv4", 00:14:03.405 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:03.405 "trsvcid": "0" 00:14:03.405 } 00:14:03.405 ], 00:14:03.405 "allow_any_host": true, 00:14:03.405 "hosts": [], 00:14:03.405 "serial_number": "SPDK1", 00:14:03.405 "model_number": "SPDK bdev Controller", 00:14:03.405 "max_namespaces": 32, 00:14:03.405 "min_cntlid": 1, 00:14:03.405 "max_cntlid": 65519, 00:14:03.405 "namespaces": [ 00:14:03.405 { 00:14:03.405 "nsid": 1, 00:14:03.405 "bdev_name": "Malloc1", 00:14:03.405 "name": "Malloc1", 00:14:03.405 "nguid": "EC14D1F6E1044226B715547B553D1400", 00:14:03.405 "uuid": "ec14d1f6-e104-4226-b715-547b553d1400" 00:14:03.405 }, 00:14:03.405 { 00:14:03.405 "nsid": 2, 00:14:03.405 "bdev_name": "Malloc3", 00:14:03.405 "name": "Malloc3", 00:14:03.405 "nguid": "165AC7F12A3F437094AD76B24D3FA86E", 00:14:03.405 "uuid": "165ac7f1-2a3f-4370-94ad-76b24d3fa86e" 00:14:03.405 } 00:14:03.405 ] 00:14:03.405 }, 00:14:03.405 { 00:14:03.405 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:03.405 "subtype": "NVMe", 00:14:03.405 "listen_addresses": [ 00:14:03.405 { 00:14:03.405 "trtype": "VFIOUSER", 00:14:03.405 "adrfam": "IPv4", 00:14:03.405 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:03.405 "trsvcid": "0" 00:14:03.405 } 00:14:03.405 ], 00:14:03.405 "allow_any_host": true, 00:14:03.405 "hosts": [], 00:14:03.405 "serial_number": "SPDK2", 00:14:03.405 "model_number": "SPDK bdev Controller", 00:14:03.405 "max_namespaces": 32, 00:14:03.405 "min_cntlid": 1, 00:14:03.405 "max_cntlid": 65519, 00:14:03.405 "namespaces": [ 
00:14:03.405 { 00:14:03.405 "nsid": 1, 00:14:03.405 "bdev_name": "Malloc2", 00:14:03.405 "name": "Malloc2", 00:14:03.405 "nguid": "CC698650B92E4C38BF4F113005FE986C", 00:14:03.405 "uuid": "cc698650-b92e-4c38-bf4f-113005fe986c" 00:14:03.405 }, 00:14:03.405 { 00:14:03.405 "nsid": 2, 00:14:03.405 "bdev_name": "Malloc4", 00:14:03.405 "name": "Malloc4", 00:14:03.405 "nguid": "235E0875A96B4DE78AC5192089C7A9C7", 00:14:03.405 "uuid": "235e0875-a96b-4de7-8ac5-192089c7a9c7" 00:14:03.405 } 00:14:03.405 ] 00:14:03.405 } 00:14:03.405 ] 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2292599 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2284957 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2284957 ']' 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2284957 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2284957 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2284957' 00:14:03.405 killing process with pid 2284957 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@971 -- # kill 2284957 00:14:03.405 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2284957 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2292888 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2292888' 00:14:03.665 Process pid: 2292888 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2292888 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2292888 ']' 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:03.665 
12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.665 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:03.666 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:03.666 [2024-11-18 12:57:01.227184] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:03.666 [2024-11-18 12:57:01.228073] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:14:03.666 [2024-11-18 12:57:01.228114] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.666 [2024-11-18 12:57:01.304656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.666 [2024-11-18 12:57:01.348446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.666 [2024-11-18 12:57:01.348483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.666 [2024-11-18 12:57:01.348491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.666 [2024-11-18 12:57:01.348498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.666 [2024-11-18 12:57:01.348503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.666 [2024-11-18 12:57:01.350097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.666 [2024-11-18 12:57:01.350136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.666 [2024-11-18 12:57:01.350266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.666 [2024-11-18 12:57:01.350266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.926 [2024-11-18 12:57:01.418577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:03.926 [2024-11-18 12:57:01.419286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:03.926 [2024-11-18 12:57:01.419613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:03.926 [2024-11-18 12:57:01.420000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:03.926 [2024-11-18 12:57:01.420051] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:03.926 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:03.926 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:03.926 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:04.867 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:05.127 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:05.127 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:05.127 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:05.127 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:05.127 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:05.387 Malloc1 00:14:05.387 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:05.647 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:05.647 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:05.907 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:05.907 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:05.907 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:06.168 Malloc2 00:14:06.168 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:06.428 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:06.688 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:06.688 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:06.688 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2292888 00:14:06.688 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2292888 ']' 00:14:06.688 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2292888 00:14:06.688 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:06.688 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:06.688 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2292888 00:14:06.949 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:06.949 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:06.949 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2292888' 00:14:06.949 killing process with pid 2292888 00:14:06.949 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2292888 00:14:06.949 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2292888 00:14:06.949 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:06.949 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:06.949 00:14:06.949 real 0m51.014s 00:14:06.949 user 3m17.308s 00:14:06.949 sys 0m3.293s 00:14:06.949 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:06.949 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:06.949 ************************************ 00:14:06.949 END TEST nvmf_vfio_user 00:14:06.949 ************************************ 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.210 ************************************ 00:14:07.210 START TEST nvmf_vfio_user_nvme_compliance 00:14:07.210 ************************************ 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:07.210 * Looking for test storage... 00:14:07.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.210 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.211 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.211 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:07.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.211 --rc genhtml_branch_coverage=1 00:14:07.211 --rc genhtml_function_coverage=1 00:14:07.211 --rc genhtml_legend=1 00:14:07.211 --rc geninfo_all_blocks=1 00:14:07.211 --rc geninfo_unexecuted_blocks=1 00:14:07.211 00:14:07.211 ' 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:07.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.211 --rc genhtml_branch_coverage=1 00:14:07.211 --rc genhtml_function_coverage=1 00:14:07.211 --rc genhtml_legend=1 00:14:07.211 --rc geninfo_all_blocks=1 00:14:07.211 --rc geninfo_unexecuted_blocks=1 00:14:07.211 00:14:07.211 ' 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:07.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.211 --rc genhtml_branch_coverage=1 00:14:07.211 --rc genhtml_function_coverage=1 00:14:07.211 --rc 
genhtml_legend=1 00:14:07.211 --rc geninfo_all_blocks=1 00:14:07.211 --rc geninfo_unexecuted_blocks=1 00:14:07.211 00:14:07.211 ' 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:07.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.211 --rc genhtml_branch_coverage=1 00:14:07.211 --rc genhtml_function_coverage=1 00:14:07.211 --rc genhtml_legend=1 00:14:07.211 --rc geninfo_all_blocks=1 00:14:07.211 --rc geninfo_unexecuted_blocks=1 00:14:07.211 00:14:07.211 ' 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.211 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.211 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.211 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.212 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:07.212 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:07.212 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2293538 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2293538' 00:14:07.473 Process pid: 2293538 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2293538 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 2293538 ']' 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.473 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:07.473 [2024-11-18 12:57:04.957492] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:14:07.473 [2024-11-18 12:57:04.957540] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.473 [2024-11-18 12:57:05.034880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.473 [2024-11-18 12:57:05.077298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.473 [2024-11-18 12:57:05.077339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.473 [2024-11-18 12:57:05.077346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.473 [2024-11-18 12:57:05.077357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.473 [2024-11-18 12:57:05.077363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:07.473 [2024-11-18 12:57:05.082369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.473 [2024-11-18 12:57:05.082409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.473 [2024-11-18 12:57:05.082409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.734 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.734 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:14:07.734 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.676 12:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 malloc0 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:08.676 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:08.936 00:14:08.936 00:14:08.936 CUnit - A unit testing framework for C - Version 2.1-3 00:14:08.936 http://cunit.sourceforge.net/ 00:14:08.936 00:14:08.936 00:14:08.936 Suite: nvme_compliance 00:14:08.936 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-18 12:57:06.432807] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.936 [2024-11-18 12:57:06.434147] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:08.936 [2024-11-18 12:57:06.434162] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:08.936 [2024-11-18 12:57:06.434168] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:08.936 [2024-11-18 12:57:06.435837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.936 passed 00:14:08.936 Test: admin_identify_ctrlr_verify_fused ...[2024-11-18 12:57:06.514404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.936 [2024-11-18 12:57:06.520442] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.936 passed 00:14:08.936 Test: admin_identify_ns ...[2024-11-18 12:57:06.596720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.195 [2024-11-18 12:57:06.663366] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:09.195 [2024-11-18 12:57:06.671361] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:09.195 [2024-11-18 12:57:06.692455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:09.195 passed 00:14:09.195 Test: admin_get_features_mandatory_features ...[2024-11-18 12:57:06.770320] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.195 [2024-11-18 12:57:06.773339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.195 passed 00:14:09.195 Test: admin_get_features_optional_features ...[2024-11-18 12:57:06.850822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.195 [2024-11-18 12:57:06.853847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.195 passed 00:14:09.454 Test: admin_set_features_number_of_queues ...[2024-11-18 12:57:06.932718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.454 [2024-11-18 12:57:07.037448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.454 passed 00:14:09.454 Test: admin_get_log_page_mandatory_logs ...[2024-11-18 12:57:07.114418] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.454 [2024-11-18 12:57:07.117441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.454 passed 00:14:09.714 Test: admin_get_log_page_with_lpo ...[2024-11-18 12:57:07.192178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.714 [2024-11-18 12:57:07.259373] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:09.714 [2024-11-18 12:57:07.272402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.714 passed 00:14:09.714 Test: fabric_property_get ...[2024-11-18 12:57:07.348109] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.714 [2024-11-18 12:57:07.349345] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:09.714 [2024-11-18 12:57:07.351131] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.714 passed 00:14:09.975 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-18 12:57:07.429621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.975 [2024-11-18 12:57:07.430858] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:09.975 [2024-11-18 12:57:07.432638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.975 passed 00:14:09.975 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-18 12:57:07.508714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.975 [2024-11-18 12:57:07.596356] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:09.975 [2024-11-18 12:57:07.612358] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:09.975 [2024-11-18 12:57:07.617438] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.975 passed 00:14:10.235 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-18 12:57:07.693332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.235 [2024-11-18 12:57:07.694566] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:10.235 [2024-11-18 12:57:07.696348] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.235 passed 00:14:10.235 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-18 12:57:07.771765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.235 [2024-11-18 12:57:07.847373] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:10.235 [2024-11-18 
12:57:07.871362] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:10.235 [2024-11-18 12:57:07.879470] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.235 passed 00:14:10.495 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-18 12:57:07.953651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.495 [2024-11-18 12:57:07.954887] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:10.495 [2024-11-18 12:57:07.954913] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:10.495 [2024-11-18 12:57:07.956666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.495 passed 00:14:10.495 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-18 12:57:08.034546] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.495 [2024-11-18 12:57:08.126366] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:10.495 [2024-11-18 12:57:08.134363] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:10.495 [2024-11-18 12:57:08.142359] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:10.495 [2024-11-18 12:57:08.150370] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:10.495 [2024-11-18 12:57:08.179443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.756 passed 00:14:10.756 Test: admin_create_io_sq_verify_pc ...[2024-11-18 12:57:08.255517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.756 [2024-11-18 12:57:08.273365] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:10.756 [2024-11-18 12:57:08.290737] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.756 passed 00:14:10.756 Test: admin_create_io_qp_max_qps ...[2024-11-18 12:57:08.369249] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.143 [2024-11-18 12:57:09.456364] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:12.143 [2024-11-18 12:57:09.835347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:12.403 passed 00:14:12.404 Test: admin_create_io_sq_shared_cq ...[2024-11-18 12:57:09.913822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.404 [2024-11-18 12:57:10.046357] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:12.404 [2024-11-18 12:57:10.083422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:12.664 passed 00:14:12.664 00:14:12.664 Run Summary: Type Total Ran Passed Failed Inactive 00:14:12.664 suites 1 1 n/a 0 0 00:14:12.664 tests 18 18 18 0 0 00:14:12.664 asserts 360 360 360 0 n/a 00:14:12.664 00:14:12.664 Elapsed time = 1.500 seconds 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2293538 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 2293538 ']' 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 2293538 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2293538 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2293538' 00:14:12.664 killing process with pid 2293538 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 2293538 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 2293538 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:12.664 00:14:12.664 real 0m5.662s 00:14:12.664 user 0m15.774s 00:14:12.664 sys 0m0.535s 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:12.664 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.664 ************************************ 00:14:12.665 END TEST nvmf_vfio_user_nvme_compliance 00:14:12.665 ************************************ 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.925 ************************************ 00:14:12.925 START TEST nvmf_vfio_user_fuzz 00:14:12.925 ************************************ 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:12.925 * Looking for test storage... 00:14:12.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:12.925 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.926 12:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:12.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.926 --rc genhtml_branch_coverage=1 00:14:12.926 --rc genhtml_function_coverage=1 00:14:12.926 --rc genhtml_legend=1 00:14:12.926 --rc geninfo_all_blocks=1 00:14:12.926 --rc geninfo_unexecuted_blocks=1 00:14:12.926 00:14:12.926 ' 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:12.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.926 --rc genhtml_branch_coverage=1 00:14:12.926 --rc genhtml_function_coverage=1 00:14:12.926 --rc genhtml_legend=1 00:14:12.926 --rc geninfo_all_blocks=1 00:14:12.926 --rc geninfo_unexecuted_blocks=1 00:14:12.926 00:14:12.926 ' 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:12.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.926 --rc genhtml_branch_coverage=1 00:14:12.926 --rc genhtml_function_coverage=1 00:14:12.926 --rc genhtml_legend=1 00:14:12.926 --rc geninfo_all_blocks=1 00:14:12.926 --rc geninfo_unexecuted_blocks=1 00:14:12.926 00:14:12.926 ' 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:12.926 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:12.926 --rc genhtml_branch_coverage=1 00:14:12.926 --rc genhtml_function_coverage=1 00:14:12.926 --rc genhtml_legend=1 00:14:12.926 --rc geninfo_all_blocks=1 00:14:12.926 --rc geninfo_unexecuted_blocks=1 00:14:12.926 00:14:12.926 ' 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.926 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.187 12:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:13.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2294860 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2294860' 00:14:13.187 Process pid: 2294860 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2294860 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 2294860 ']' 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:13.187 12:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:13.187 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:13.448 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:13.448 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:14:13.448 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.388 malloc0 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:14.388 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:46.478 Fuzzing completed. Shutting down the fuzz application 00:14:46.478 00:14:46.478 Dumping successful admin opcodes: 00:14:46.478 8, 9, 10, 24, 00:14:46.478 Dumping successful io opcodes: 00:14:46.478 0, 00:14:46.478 NS: 0x20000081ef00 I/O qp, Total commands completed: 1021609, total successful commands: 4017, random_seed: 428638848 00:14:46.478 NS: 0x20000081ef00 admin qp, Total commands completed: 253362, total successful commands: 2047, random_seed: 31984192 00:14:46.478 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2294860 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 2294860 ']' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 2294860 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2294860 00:14:46.479 12:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2294860' 00:14:46.479 killing process with pid 2294860 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 2294860 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 2294860 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:46.479 00:14:46.479 real 0m32.233s 00:14:46.479 user 0m30.662s 00:14:46.479 sys 0m30.704s 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:46.479 ************************************ 00:14:46.479 END TEST nvmf_vfio_user_fuzz 00:14:46.479 ************************************ 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.479 ************************************ 00:14:46.479 START TEST nvmf_auth_target 00:14:46.479 ************************************ 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:46.479 * Looking for test storage... 00:14:46.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.479 12:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.479 12:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:46.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.479 --rc genhtml_branch_coverage=1 00:14:46.479 --rc genhtml_function_coverage=1 00:14:46.479 --rc genhtml_legend=1 00:14:46.479 --rc geninfo_all_blocks=1 00:14:46.479 --rc geninfo_unexecuted_blocks=1 00:14:46.479 00:14:46.479 ' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:46.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.479 --rc genhtml_branch_coverage=1 00:14:46.479 --rc genhtml_function_coverage=1 00:14:46.479 --rc genhtml_legend=1 00:14:46.479 --rc geninfo_all_blocks=1 00:14:46.479 --rc geninfo_unexecuted_blocks=1 00:14:46.479 00:14:46.479 ' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:46.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.479 --rc genhtml_branch_coverage=1 00:14:46.479 --rc genhtml_function_coverage=1 00:14:46.479 --rc genhtml_legend=1 00:14:46.479 --rc geninfo_all_blocks=1 00:14:46.479 --rc geninfo_unexecuted_blocks=1 00:14:46.479 00:14:46.479 ' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:46.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.479 --rc genhtml_branch_coverage=1 00:14:46.479 --rc genhtml_function_coverage=1 00:14:46.479 --rc genhtml_legend=1 00:14:46.479 
--rc geninfo_all_blocks=1 00:14:46.479 --rc geninfo_unexecuted_blocks=1 00:14:46.479 00:14:46.479 ' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.479 
12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:46.479 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:46.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:46.480 12:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:46.480 12:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:46.480 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:51.759 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:51.759 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:51.760 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:51.760 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:51.760 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.760 
12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:51.760 Found net devices under 0000:86:00.0: cvl_0_0 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.760 
12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:51.760 Found net devices under 0000:86:00.1: cvl_0_1 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:51.760 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:51.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:14:51.760 00:14:51.760 --- 10.0.0.2 ping statistics --- 00:14:51.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.760 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:14:51.760 00:14:51.760 --- 10.0.0.1 ping statistics --- 00:14:51.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.760 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2303306 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2303306 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2303306 ']' 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:51.760 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.760 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2303416 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=49fd814f4cd78b2be32ea8bb5a4d1c8cdbfc71aac35d771a 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.IiK 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 49fd814f4cd78b2be32ea8bb5a4d1c8cdbfc71aac35d771a 0 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 49fd814f4cd78b2be32ea8bb5a4d1c8cdbfc71aac35d771a 0 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=49fd814f4cd78b2be32ea8bb5a4d1c8cdbfc71aac35d771a 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.IiK 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.IiK 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.IiK 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=830408f378bed238be47f6793722a019778083d3e753e1d7bd496907ee2c2654 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pJM 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 830408f378bed238be47f6793722a019778083d3e753e1d7bd496907ee2c2654 3 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 830408f378bed238be47f6793722a019778083d3e753e1d7bd496907ee2c2654 3 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=830408f378bed238be47f6793722a019778083d3e753e1d7bd496907ee2c2654 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pJM 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pJM 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.pJM 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e1d3da47e512ec499425be3d1074bfc6 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.WqN 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e1d3da47e512ec499425be3d1074bfc6 1 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
e1d3da47e512ec499425be3d1074bfc6 1 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e1d3da47e512ec499425be3d1074bfc6 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.WqN 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.WqN 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.WqN 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3cce454df18ec4102cb2468092c2eab635ab7a8505d1fd29 00:14:51.761 12:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.W9L 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3cce454df18ec4102cb2468092c2eab635ab7a8505d1fd29 2 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3cce454df18ec4102cb2468092c2eab635ab7a8505d1fd29 2 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3cce454df18ec4102cb2468092c2eab635ab7a8505d1fd29 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.W9L 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.W9L 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.W9L 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51345d6e26c78a0ce5c9d93ea3943e89d7aef93d4c8e08b1 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4A0 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 51345d6e26c78a0ce5c9d93ea3943e89d7aef93d4c8e08b1 2 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 51345d6e26c78a0ce5c9d93ea3943e89d7aef93d4c8e08b1 2 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:51.761 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51345d6e26c78a0ce5c9d93ea3943e89d7aef93d4c8e08b1 00:14:51.762 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:51.762 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:52.021 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4A0 00:14:52.021 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4A0 00:14:52.021 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.4A0 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=258294955d44b574ee4b9a6f7f6f0f9d 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.19E 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 258294955d44b574ee4b9a6f7f6f0f9d 1 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 258294955d44b574ee4b9a6f7f6f0f9d 1 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=258294955d44b574ee4b9a6f7f6f0f9d 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.19E 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.19E 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.19E 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=994b3527100b9eb525266b0f2800a8a21ff012ec356928e6fb99ead9ca314c12 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lh1 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 994b3527100b9eb525266b0f2800a8a21ff012ec356928e6fb99ead9ca314c12 3 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 994b3527100b9eb525266b0f2800a8a21ff012ec356928e6fb99ead9ca314c12 3 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=994b3527100b9eb525266b0f2800a8a21ff012ec356928e6fb99ead9ca314c12 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lh1 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lh1 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.lh1 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2303306 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2303306 ']' 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:52.022 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.282 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:52.282 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:52.282 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2303416 /var/tmp/host.sock 00:14:52.282 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2303416 ']' 00:14:52.282 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:52.282 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:52.282 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:52.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:52.282 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:52.282 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.541 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:52.541 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:52.541 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:52.541 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.541 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.541 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.541 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:52.541 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IiK 00:14:52.541 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.541 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.541 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.541 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.IiK 00:14:52.541 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.IiK 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.pJM ]] 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pJM 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pJM 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pJM 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.WqN 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.WqN 00:14:52.801 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.WqN 00:14:53.060 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.W9L ]] 00:14:53.060 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W9L 00:14:53.060 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.060 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.060 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.060 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W9L 00:14:53.060 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W9L 00:14:53.320 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:53.320 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4A0 00:14:53.320 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.320 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.320 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.320 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4A0 00:14:53.320 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4A0 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.19E ]] 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.19E 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.19E 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.19E 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lh1 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.580 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.840 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.840 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lh1 00:14:53.840 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lh1 00:14:53.840 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:53.840 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:53.840 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:53.840 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.840 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:53.840 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.100 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.100 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.359 00:14:54.359 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.359 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.359 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.619 { 00:14:54.619 "cntlid": 1, 00:14:54.619 "qid": 0, 00:14:54.619 "state": "enabled", 00:14:54.619 "thread": "nvmf_tgt_poll_group_000", 00:14:54.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:54.619 "listen_address": { 00:14:54.619 "trtype": "TCP", 00:14:54.619 "adrfam": "IPv4", 00:14:54.619 "traddr": "10.0.0.2", 00:14:54.619 "trsvcid": "4420" 00:14:54.619 }, 00:14:54.619 "peer_address": { 00:14:54.619 "trtype": "TCP", 00:14:54.619 "adrfam": "IPv4", 00:14:54.619 "traddr": "10.0.0.1", 00:14:54.619 "trsvcid": "58792" 00:14:54.619 }, 00:14:54.619 "auth": { 00:14:54.619 "state": "completed", 00:14:54.619 "digest": "sha256", 00:14:54.619 "dhgroup": "null" 00:14:54.619 } 00:14:54.619 } 00:14:54.619 ]' 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.619 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.879 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:14:54.879 12:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:14:55.448 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.448 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.448 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.448 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.449 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.449 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.449 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:55.449 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.709 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.968 00:14:55.968 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.968 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.968 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.228 { 00:14:56.228 "cntlid": 3, 00:14:56.228 "qid": 0, 00:14:56.228 "state": "enabled", 00:14:56.228 "thread": "nvmf_tgt_poll_group_000", 00:14:56.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:56.228 "listen_address": { 00:14:56.228 "trtype": "TCP", 00:14:56.228 "adrfam": "IPv4", 00:14:56.228 
"traddr": "10.0.0.2", 00:14:56.228 "trsvcid": "4420" 00:14:56.228 }, 00:14:56.228 "peer_address": { 00:14:56.228 "trtype": "TCP", 00:14:56.228 "adrfam": "IPv4", 00:14:56.228 "traddr": "10.0.0.1", 00:14:56.228 "trsvcid": "35220" 00:14:56.228 }, 00:14:56.228 "auth": { 00:14:56.228 "state": "completed", 00:14:56.228 "digest": "sha256", 00:14:56.228 "dhgroup": "null" 00:14:56.228 } 00:14:56.228 } 00:14:56.228 ]' 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.228 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.488 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:14:56.488 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:14:57.057 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.057 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:57.057 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.057 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.057 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.057 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.057 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:57.057 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.317 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.577 00:14:57.577 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.577 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.577 
12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.837 { 00:14:57.837 "cntlid": 5, 00:14:57.837 "qid": 0, 00:14:57.837 "state": "enabled", 00:14:57.837 "thread": "nvmf_tgt_poll_group_000", 00:14:57.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:57.837 "listen_address": { 00:14:57.837 "trtype": "TCP", 00:14:57.837 "adrfam": "IPv4", 00:14:57.837 "traddr": "10.0.0.2", 00:14:57.837 "trsvcid": "4420" 00:14:57.837 }, 00:14:57.837 "peer_address": { 00:14:57.837 "trtype": "TCP", 00:14:57.837 "adrfam": "IPv4", 00:14:57.837 "traddr": "10.0.0.1", 00:14:57.837 "trsvcid": "35246" 00:14:57.837 }, 00:14:57.837 "auth": { 00:14:57.837 "state": "completed", 00:14:57.837 "digest": "sha256", 00:14:57.837 "dhgroup": "null" 00:14:57.837 } 00:14:57.837 } 00:14:57.837 ]' 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.837 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.097 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:14:58.097 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:14:58.666 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.666 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.666 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.666 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.666 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.666 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:58.666 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.926 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.185 00:14:59.185 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.185 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.185 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.445 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.445 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.445 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.445 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.445 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.445 
12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.445 { 00:14:59.445 "cntlid": 7, 00:14:59.445 "qid": 0, 00:14:59.445 "state": "enabled", 00:14:59.445 "thread": "nvmf_tgt_poll_group_000", 00:14:59.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:59.445 "listen_address": { 00:14:59.445 "trtype": "TCP", 00:14:59.445 "adrfam": "IPv4", 00:14:59.445 "traddr": "10.0.0.2", 00:14:59.445 "trsvcid": "4420" 00:14:59.445 }, 00:14:59.445 "peer_address": { 00:14:59.445 "trtype": "TCP", 00:14:59.445 "adrfam": "IPv4", 00:14:59.445 "traddr": "10.0.0.1", 00:14:59.445 "trsvcid": "35270" 00:14:59.445 }, 00:14:59.445 "auth": { 00:14:59.445 "state": "completed", 00:14:59.445 "digest": "sha256", 00:14:59.445 "dhgroup": "null" 00:14:59.445 } 00:14:59.445 } 00:14:59.445 ]' 00:14:59.445 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.445 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.445 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.445 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:59.445 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.445 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.445 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.445 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.704 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:14:59.704 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:00.274 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.274 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:00.274 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.274 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.274 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.274 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.274 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.274 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:00.274 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:00.533 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.534 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.793 00:15:00.793 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.793 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.793 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.053 { 00:15:01.053 "cntlid": 9, 00:15:01.053 "qid": 0, 00:15:01.053 "state": "enabled", 00:15:01.053 "thread": "nvmf_tgt_poll_group_000", 00:15:01.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:01.053 "listen_address": { 00:15:01.053 "trtype": "TCP", 00:15:01.053 "adrfam": "IPv4", 00:15:01.053 "traddr": "10.0.0.2", 00:15:01.053 "trsvcid": "4420" 00:15:01.053 }, 00:15:01.053 "peer_address": { 00:15:01.053 "trtype": "TCP", 00:15:01.053 "adrfam": "IPv4", 00:15:01.053 "traddr": "10.0.0.1", 00:15:01.053 "trsvcid": "35292" 00:15:01.053 
}, 00:15:01.053 "auth": { 00:15:01.053 "state": "completed", 00:15:01.053 "digest": "sha256", 00:15:01.053 "dhgroup": "ffdhe2048" 00:15:01.053 } 00:15:01.053 } 00:15:01.053 ]' 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.053 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.312 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:01.312 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret 
DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:01.882 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.882 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.882 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.882 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.882 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.882 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.882 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:01.882 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.142 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.401 00:15:02.402 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.402 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.402 12:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.661 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.661 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.661 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.661 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.661 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.661 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.661 { 00:15:02.661 "cntlid": 11, 00:15:02.661 "qid": 0, 00:15:02.661 "state": "enabled", 00:15:02.661 "thread": "nvmf_tgt_poll_group_000", 00:15:02.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:02.661 "listen_address": { 00:15:02.661 "trtype": "TCP", 00:15:02.661 "adrfam": "IPv4", 00:15:02.661 "traddr": "10.0.0.2", 00:15:02.661 "trsvcid": "4420" 00:15:02.661 }, 00:15:02.661 "peer_address": { 00:15:02.661 "trtype": "TCP", 00:15:02.661 "adrfam": "IPv4", 00:15:02.661 "traddr": "10.0.0.1", 00:15:02.661 "trsvcid": "35320" 00:15:02.661 }, 00:15:02.661 "auth": { 00:15:02.661 "state": "completed", 00:15:02.661 "digest": "sha256", 00:15:02.661 "dhgroup": "ffdhe2048" 00:15:02.661 } 00:15:02.661 } 00:15:02.661 ]' 00:15:02.661 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.661 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.662 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.662 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:02.662 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.662 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.662 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.662 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.922 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:02.922 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:03.492 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.492 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:03.492 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.492 12:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.492 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.492 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.492 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:03.492 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.752 12:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.752 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.012 00:15:04.012 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.012 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.012 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.271 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.271 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.271 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.271 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.271 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.271 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.271 { 00:15:04.271 "cntlid": 13, 00:15:04.271 "qid": 0, 00:15:04.271 "state": "enabled", 00:15:04.271 "thread": "nvmf_tgt_poll_group_000", 00:15:04.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:04.271 "listen_address": { 00:15:04.271 "trtype": "TCP", 00:15:04.271 "adrfam": "IPv4", 00:15:04.271 "traddr": "10.0.0.2", 00:15:04.271 "trsvcid": "4420" 00:15:04.271 }, 00:15:04.271 "peer_address": { 00:15:04.271 "trtype": "TCP", 00:15:04.271 "adrfam": "IPv4", 00:15:04.271 "traddr": "10.0.0.1", 00:15:04.271 "trsvcid": "35354" 00:15:04.271 }, 00:15:04.271 "auth": { 00:15:04.271 "state": "completed", 00:15:04.271 "digest": "sha256", 00:15:04.271 "dhgroup": "ffdhe2048" 00:15:04.271 } 00:15:04.271 } 00:15:04.271 ]' 00:15:04.271 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.272 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.272 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.272 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:04.272 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.272 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.272 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.272 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:04.531 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:04.531 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:05.100 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.100 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.100 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.100 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.100 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.100 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.100 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:05.100 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.360 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.619 00:15:05.619 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.619 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.619 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.879 { 00:15:05.879 "cntlid": 15, 00:15:05.879 "qid": 0, 00:15:05.879 "state": "enabled", 00:15:05.879 "thread": "nvmf_tgt_poll_group_000", 00:15:05.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:05.879 "listen_address": { 00:15:05.879 "trtype": "TCP", 00:15:05.879 "adrfam": "IPv4", 00:15:05.879 "traddr": "10.0.0.2", 00:15:05.879 "trsvcid": "4420" 00:15:05.879 }, 00:15:05.879 "peer_address": { 00:15:05.879 "trtype": "TCP", 00:15:05.879 "adrfam": "IPv4", 00:15:05.879 "traddr": "10.0.0.1", 00:15:05.879 "trsvcid": "35390" 00:15:05.879 }, 00:15:05.879 "auth": { 00:15:05.879 
"state": "completed", 00:15:05.879 "digest": "sha256", 00:15:05.879 "dhgroup": "ffdhe2048" 00:15:05.879 } 00:15:05.879 } 00:15:05.879 ]' 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.879 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.194 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:06.194 12:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:06.829 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.829 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.829 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.829 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.829 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.829 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.829 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.829 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.829 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.830 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.109 00:15:07.109 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.109 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.109 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.400 
12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.400 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.400 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.400 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.400 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.400 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.400 { 00:15:07.400 "cntlid": 17, 00:15:07.400 "qid": 0, 00:15:07.400 "state": "enabled", 00:15:07.400 "thread": "nvmf_tgt_poll_group_000", 00:15:07.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:07.400 "listen_address": { 00:15:07.400 "trtype": "TCP", 00:15:07.400 "adrfam": "IPv4", 00:15:07.400 "traddr": "10.0.0.2", 00:15:07.400 "trsvcid": "4420" 00:15:07.400 }, 00:15:07.400 "peer_address": { 00:15:07.400 "trtype": "TCP", 00:15:07.400 "adrfam": "IPv4", 00:15:07.400 "traddr": "10.0.0.1", 00:15:07.400 "trsvcid": "49564" 00:15:07.400 }, 00:15:07.400 "auth": { 00:15:07.400 "state": "completed", 00:15:07.400 "digest": "sha256", 00:15:07.400 "dhgroup": "ffdhe3072" 00:15:07.400 } 00:15:07.400 } 00:15:07.400 ]' 00:15:07.400 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.400 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.400 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.400 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:07.713 12:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.713 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.713 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.713 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.713 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:07.713 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:08.417 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.417 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:08.417 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.417 12:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.417 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.417 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.417 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:08.417 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.676 12:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.676 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.935 00:15:08.935 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.935 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.935 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.935 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.935 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.935 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.935 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.194 { 00:15:09.194 "cntlid": 19, 00:15:09.194 "qid": 0, 00:15:09.194 "state": "enabled", 00:15:09.194 "thread": "nvmf_tgt_poll_group_000", 00:15:09.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:09.194 "listen_address": { 00:15:09.194 "trtype": "TCP", 00:15:09.194 "adrfam": "IPv4", 00:15:09.194 "traddr": "10.0.0.2", 00:15:09.194 "trsvcid": "4420" 00:15:09.194 }, 00:15:09.194 "peer_address": { 00:15:09.194 "trtype": "TCP", 00:15:09.194 "adrfam": "IPv4", 00:15:09.194 "traddr": "10.0.0.1", 00:15:09.194 "trsvcid": "49590" 00:15:09.194 }, 00:15:09.194 "auth": { 00:15:09.194 "state": "completed", 00:15:09.194 "digest": "sha256", 00:15:09.194 "dhgroup": "ffdhe3072" 00:15:09.194 } 00:15:09.194 } 00:15:09.194 ]' 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.194 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:09.452 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:09.452 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:10.020 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.020 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.020 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.020 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.020 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.020 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.020 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:10.020 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.280 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.539 00:15:10.539 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.539 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.539 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.798 { 00:15:10.798 "cntlid": 21, 00:15:10.798 "qid": 0, 00:15:10.798 "state": "enabled", 00:15:10.798 "thread": "nvmf_tgt_poll_group_000", 00:15:10.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:10.798 "listen_address": { 00:15:10.798 "trtype": "TCP", 00:15:10.798 "adrfam": "IPv4", 00:15:10.798 "traddr": "10.0.0.2", 00:15:10.798 "trsvcid": "4420" 00:15:10.798 }, 00:15:10.798 "peer_address": { 00:15:10.798 "trtype": "TCP", 00:15:10.798 "adrfam": "IPv4", 
00:15:10.798 "traddr": "10.0.0.1", 00:15:10.798 "trsvcid": "49622" 00:15:10.798 }, 00:15:10.798 "auth": { 00:15:10.798 "state": "completed", 00:15:10.798 "digest": "sha256", 00:15:10.798 "dhgroup": "ffdhe3072" 00:15:10.798 } 00:15:10.798 } 00:15:10.798 ]' 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.798 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.058 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:11.058 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:11.627 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.627 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:11.627 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.627 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.627 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.627 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.627 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:11.627 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:11.887 12:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.887 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.146 00:15:12.146 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.146 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.146 12:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.146 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.406 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.406 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.406 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.406 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.406 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.406 { 00:15:12.406 "cntlid": 23, 00:15:12.406 "qid": 0, 00:15:12.406 "state": "enabled", 00:15:12.406 "thread": "nvmf_tgt_poll_group_000", 00:15:12.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:12.406 "listen_address": { 00:15:12.406 "trtype": "TCP", 00:15:12.406 "adrfam": "IPv4", 00:15:12.406 "traddr": "10.0.0.2", 00:15:12.406 "trsvcid": "4420" 00:15:12.406 }, 00:15:12.406 "peer_address": { 00:15:12.406 "trtype": "TCP", 00:15:12.406 "adrfam": "IPv4", 00:15:12.407 "traddr": "10.0.0.1", 00:15:12.407 "trsvcid": "49648" 00:15:12.407 }, 00:15:12.407 "auth": { 00:15:12.407 "state": "completed", 00:15:12.407 "digest": "sha256", 00:15:12.407 "dhgroup": "ffdhe3072" 00:15:12.407 } 00:15:12.407 } 00:15:12.407 ]' 00:15:12.407 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.407 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.407 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.407 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:12.407 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.407 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.407 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.407 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.666 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:12.666 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:13.235 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.235 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.235 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.235 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.235 12:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.235 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.235 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.235 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:13.235 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:13.494 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:13.494 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.494 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.494 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:13.494 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.494 12:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.494 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.494 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.494 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.494 
12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.494 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.494 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.495 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.754 00:15:13.754 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.754 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.754 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.013 { 00:15:14.013 "cntlid": 25, 00:15:14.013 "qid": 0, 00:15:14.013 "state": "enabled", 00:15:14.013 "thread": "nvmf_tgt_poll_group_000", 00:15:14.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:14.013 "listen_address": { 00:15:14.013 "trtype": "TCP", 00:15:14.013 "adrfam": "IPv4", 00:15:14.013 "traddr": "10.0.0.2", 00:15:14.013 "trsvcid": "4420" 00:15:14.013 }, 00:15:14.013 "peer_address": { 00:15:14.013 "trtype": "TCP", 00:15:14.013 "adrfam": "IPv4", 00:15:14.013 "traddr": "10.0.0.1", 00:15:14.013 "trsvcid": "49672" 00:15:14.013 }, 00:15:14.013 "auth": { 00:15:14.013 "state": "completed", 00:15:14.013 "digest": "sha256", 00:15:14.013 "dhgroup": "ffdhe4096" 00:15:14.013 } 00:15:14.013 } 00:15:14.013 ]' 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.013 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:14.273 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:14.273 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:14.840 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.840 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.840 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.840 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.840 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.840 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.840 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:14.840 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.100 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.100 12:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.360 00:15:15.360 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.360 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.360 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.619 { 00:15:15.619 "cntlid": 27, 00:15:15.619 "qid": 0, 00:15:15.619 "state": "enabled", 00:15:15.619 "thread": "nvmf_tgt_poll_group_000", 00:15:15.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:15.619 "listen_address": { 00:15:15.619 "trtype": "TCP", 00:15:15.619 "adrfam": "IPv4", 00:15:15.619 "traddr": "10.0.0.2", 00:15:15.619 "trsvcid": "4420" 00:15:15.619 }, 00:15:15.619 "peer_address": { 
00:15:15.619 "trtype": "TCP", 00:15:15.619 "adrfam": "IPv4", 00:15:15.619 "traddr": "10.0.0.1", 00:15:15.619 "trsvcid": "49680" 00:15:15.619 }, 00:15:15.619 "auth": { 00:15:15.619 "state": "completed", 00:15:15.619 "digest": "sha256", 00:15:15.619 "dhgroup": "ffdhe4096" 00:15:15.619 } 00:15:15.619 } 00:15:15.619 ]' 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.619 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.878 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:15.878 12:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:16.447 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.447 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.447 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.447 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.447 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.447 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.447 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.447 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.706 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:16.706 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.706 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.706 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:16.706 12:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:16.707 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.707 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.707 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.707 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.707 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.707 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.707 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.707 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.966 00:15:16.966 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.966 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.966 12:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.225 { 00:15:17.225 "cntlid": 29, 00:15:17.225 "qid": 0, 00:15:17.225 "state": "enabled", 00:15:17.225 "thread": "nvmf_tgt_poll_group_000", 00:15:17.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:17.225 "listen_address": { 00:15:17.225 "trtype": "TCP", 00:15:17.225 "adrfam": "IPv4", 00:15:17.225 "traddr": "10.0.0.2", 00:15:17.225 "trsvcid": "4420" 00:15:17.225 }, 00:15:17.225 "peer_address": { 00:15:17.225 "trtype": "TCP", 00:15:17.225 "adrfam": "IPv4", 00:15:17.225 "traddr": "10.0.0.1", 00:15:17.225 "trsvcid": "45832" 00:15:17.225 }, 00:15:17.225 "auth": { 00:15:17.225 "state": "completed", 00:15:17.225 "digest": "sha256", 00:15:17.225 "dhgroup": "ffdhe4096" 00:15:17.225 } 00:15:17.225 } 00:15:17.225 ]' 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # 
jq -r '.[0].auth.dhgroup' 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.225 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.484 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:17.484 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:18.054 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.054 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.054 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.054 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.054 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.054 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.054 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:18.054 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.313 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.573 00:15:18.573 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.573 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.573 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.832 { 00:15:18.832 "cntlid": 31, 00:15:18.832 "qid": 0, 00:15:18.832 "state": "enabled", 00:15:18.832 "thread": "nvmf_tgt_poll_group_000", 00:15:18.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:18.832 "listen_address": { 00:15:18.832 "trtype": "TCP", 00:15:18.832 "adrfam": "IPv4", 00:15:18.832 "traddr": "10.0.0.2", 00:15:18.832 "trsvcid": "4420" 00:15:18.832 }, 00:15:18.832 "peer_address": { 00:15:18.832 "trtype": "TCP", 00:15:18.832 "adrfam": "IPv4", 00:15:18.832 "traddr": "10.0.0.1", 00:15:18.832 "trsvcid": "45870" 00:15:18.832 }, 00:15:18.832 "auth": { 00:15:18.832 "state": "completed", 00:15:18.832 "digest": "sha256", 00:15:18.832 "dhgroup": "ffdhe4096" 00:15:18.832 } 00:15:18.832 } 00:15:18.832 ]' 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.832 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:19.092 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:19.092 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:19.660 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.660 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:19.660 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.660 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.660 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.660 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.660 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.660 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:19.660 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.922 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.490 00:15:20.490 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.490 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.490 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.490 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.490 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.490 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.490 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.490 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.490 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.490 { 00:15:20.490 "cntlid": 33, 00:15:20.490 "qid": 0, 00:15:20.490 "state": "enabled", 00:15:20.490 "thread": "nvmf_tgt_poll_group_000", 00:15:20.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:20.490 "listen_address": { 00:15:20.490 "trtype": "TCP", 00:15:20.490 "adrfam": "IPv4", 00:15:20.490 "traddr": "10.0.0.2", 00:15:20.490 "trsvcid": "4420" 00:15:20.490 }, 00:15:20.490 "peer_address": { 00:15:20.490 "trtype": "TCP", 00:15:20.490 "adrfam": "IPv4", 
00:15:20.490 "traddr": "10.0.0.1", 00:15:20.490 "trsvcid": "45898" 00:15:20.490 }, 00:15:20.490 "auth": { 00:15:20.490 "state": "completed", 00:15:20.490 "digest": "sha256", 00:15:20.490 "dhgroup": "ffdhe6144" 00:15:20.490 } 00:15:20.490 } 00:15:20.490 ]' 00:15:20.490 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.490 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.490 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.749 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:20.749 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.749 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.749 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.749 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.008 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:21.009 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.577 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.147 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.147 { 00:15:22.147 "cntlid": 35, 00:15:22.147 "qid": 0, 00:15:22.147 "state": "enabled", 00:15:22.147 "thread": "nvmf_tgt_poll_group_000", 00:15:22.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.147 "listen_address": { 00:15:22.147 "trtype": "TCP", 00:15:22.147 "adrfam": "IPv4", 00:15:22.147 "traddr": "10.0.0.2", 00:15:22.147 "trsvcid": "4420" 00:15:22.147 }, 00:15:22.147 "peer_address": { 00:15:22.147 "trtype": "TCP", 00:15:22.147 "adrfam": "IPv4", 00:15:22.147 "traddr": "10.0.0.1", 00:15:22.147 "trsvcid": "45926" 00:15:22.147 }, 00:15:22.147 "auth": { 00:15:22.147 "state": "completed", 00:15:22.147 "digest": "sha256", 00:15:22.147 "dhgroup": "ffdhe6144" 00:15:22.147 } 00:15:22.147 } 00:15:22.147 ]' 00:15:22.147 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.406 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.406 12:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.406 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.406 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.406 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.406 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.406 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.665 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:22.665 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:23.233 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.233 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.233 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.233 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.233 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.233 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.233 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.233 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.492 12:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.492 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.751 00:15:23.751 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.751 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.751 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.010 { 00:15:24.010 "cntlid": 37, 00:15:24.010 "qid": 0, 00:15:24.010 "state": "enabled", 00:15:24.010 "thread": "nvmf_tgt_poll_group_000", 00:15:24.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:24.010 "listen_address": { 00:15:24.010 "trtype": "TCP", 00:15:24.010 "adrfam": "IPv4", 00:15:24.010 "traddr": "10.0.0.2", 00:15:24.010 "trsvcid": "4420" 00:15:24.010 }, 00:15:24.010 "peer_address": { 00:15:24.010 "trtype": "TCP", 00:15:24.010 "adrfam": "IPv4", 00:15:24.010 "traddr": "10.0.0.1", 00:15:24.010 "trsvcid": "45946" 00:15:24.010 }, 00:15:24.010 "auth": { 00:15:24.010 "state": "completed", 00:15:24.010 "digest": "sha256", 00:15:24.010 "dhgroup": "ffdhe6144" 00:15:24.010 } 00:15:24.010 } 00:15:24.010 ]' 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:24.010 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.269 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:24.269 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:24.834 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.834 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:24.834 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.834 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.834 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.834 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.834 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:24.834 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:25.092 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:25.092 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.092 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.092 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:25.092 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.092 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.093 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:25.093 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.093 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.093 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.093 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.093 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.093 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.352 00:15:25.352 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.352 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.352 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.611 { 00:15:25.611 "cntlid": 39, 00:15:25.611 "qid": 0, 00:15:25.611 "state": "enabled", 00:15:25.611 "thread": "nvmf_tgt_poll_group_000", 00:15:25.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:25.611 "listen_address": { 00:15:25.611 "trtype": "TCP", 00:15:25.611 "adrfam": "IPv4", 00:15:25.611 "traddr": "10.0.0.2", 00:15:25.611 "trsvcid": 
"4420" 00:15:25.611 }, 00:15:25.611 "peer_address": { 00:15:25.611 "trtype": "TCP", 00:15:25.611 "adrfam": "IPv4", 00:15:25.611 "traddr": "10.0.0.1", 00:15:25.611 "trsvcid": "45974" 00:15:25.611 }, 00:15:25.611 "auth": { 00:15:25.611 "state": "completed", 00:15:25.611 "digest": "sha256", 00:15:25.611 "dhgroup": "ffdhe6144" 00:15:25.611 } 00:15:25.611 } 00:15:25.611 ]' 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:25.611 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.870 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.870 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.870 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.870 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:25.870 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:26.437 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.437 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.437 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.437 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.697 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.264 00:15:27.264 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.264 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:27.264 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.524 { 00:15:27.524 "cntlid": 41, 00:15:27.524 "qid": 0, 00:15:27.524 "state": "enabled", 00:15:27.524 "thread": "nvmf_tgt_poll_group_000", 00:15:27.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.524 "listen_address": { 00:15:27.524 "trtype": "TCP", 00:15:27.524 "adrfam": "IPv4", 00:15:27.524 "traddr": "10.0.0.2", 00:15:27.524 "trsvcid": "4420" 00:15:27.524 }, 00:15:27.524 "peer_address": { 00:15:27.524 "trtype": "TCP", 00:15:27.524 "adrfam": "IPv4", 00:15:27.524 "traddr": "10.0.0.1", 00:15:27.524 "trsvcid": "36082" 00:15:27.524 }, 00:15:27.524 "auth": { 00:15:27.524 "state": "completed", 00:15:27.524 "digest": "sha256", 00:15:27.524 "dhgroup": "ffdhe8192" 00:15:27.524 } 00:15:27.524 } 00:15:27.524 ]' 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.524 12:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.524 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.783 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:27.783 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:28.352 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.352 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.352 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.352 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.352 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.352 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.352 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.352 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.611 12:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.611 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.180 00:15:29.180 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.180 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.180 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.180 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.180 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.180 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.180 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.439 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.439 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.439 { 00:15:29.439 "cntlid": 43, 00:15:29.439 "qid": 0, 00:15:29.439 "state": "enabled", 00:15:29.439 "thread": "nvmf_tgt_poll_group_000", 00:15:29.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:29.439 "listen_address": { 00:15:29.439 "trtype": "TCP", 00:15:29.439 "adrfam": "IPv4", 00:15:29.439 "traddr": "10.0.0.2", 00:15:29.439 "trsvcid": "4420" 00:15:29.439 }, 00:15:29.439 "peer_address": { 00:15:29.439 "trtype": "TCP", 00:15:29.439 "adrfam": "IPv4", 00:15:29.439 "traddr": "10.0.0.1", 00:15:29.439 "trsvcid": "36122" 00:15:29.439 }, 00:15:29.439 "auth": { 00:15:29.439 "state": "completed", 00:15:29.439 "digest": "sha256", 00:15:29.439 "dhgroup": "ffdhe8192" 00:15:29.439 } 00:15:29.439 } 00:15:29.439 ]' 00:15:29.439 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.439 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.439 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.439 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.439 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.439 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.439 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:29.440 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.699 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:29.699 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:30.267 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.267 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.267 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.267 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.267 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.267 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.267 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.267 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.527 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.786 00:15:30.786 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.786 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.786 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.045 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.045 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.045 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.045 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.045 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.045 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.045 { 00:15:31.045 "cntlid": 45, 00:15:31.045 "qid": 0, 00:15:31.045 "state": "enabled", 00:15:31.045 "thread": "nvmf_tgt_poll_group_000", 00:15:31.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:31.045 "listen_address": { 
00:15:31.045 "trtype": "TCP", 00:15:31.045 "adrfam": "IPv4", 00:15:31.045 "traddr": "10.0.0.2", 00:15:31.045 "trsvcid": "4420" 00:15:31.045 }, 00:15:31.045 "peer_address": { 00:15:31.045 "trtype": "TCP", 00:15:31.045 "adrfam": "IPv4", 00:15:31.045 "traddr": "10.0.0.1", 00:15:31.045 "trsvcid": "36138" 00:15:31.045 }, 00:15:31.045 "auth": { 00:15:31.045 "state": "completed", 00:15:31.045 "digest": "sha256", 00:15:31.045 "dhgroup": "ffdhe8192" 00:15:31.045 } 00:15:31.045 } 00:15:31.045 ]' 00:15:31.045 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.045 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.045 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.304 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:31.304 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.304 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.304 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.304 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.563 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:31.563 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:32.131 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:32.132 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:32.132 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.132 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:32.132 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.132 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.132 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.132 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:32.132 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.132 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.700 00:15:32.700 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.700 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:32.700 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.960 { 00:15:32.960 "cntlid": 47, 00:15:32.960 "qid": 0, 00:15:32.960 "state": "enabled", 00:15:32.960 "thread": "nvmf_tgt_poll_group_000", 00:15:32.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.960 "listen_address": { 00:15:32.960 "trtype": "TCP", 00:15:32.960 "adrfam": "IPv4", 00:15:32.960 "traddr": "10.0.0.2", 00:15:32.960 "trsvcid": "4420" 00:15:32.960 }, 00:15:32.960 "peer_address": { 00:15:32.960 "trtype": "TCP", 00:15:32.960 "adrfam": "IPv4", 00:15:32.960 "traddr": "10.0.0.1", 00:15:32.960 "trsvcid": "36150" 00:15:32.960 }, 00:15:32.960 "auth": { 00:15:32.960 "state": "completed", 00:15:32.960 "digest": "sha256", 00:15:32.960 "dhgroup": "ffdhe8192" 00:15:32.960 } 00:15:32.960 } 00:15:32.960 ]' 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.960 12:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:32.960 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.220 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.220 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.220 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.220 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:33.220 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.789 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.049 
12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.049 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.308 00:15:34.308 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.308 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.308 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.567 { 00:15:34.567 "cntlid": 49, 00:15:34.567 "qid": 0, 00:15:34.567 "state": "enabled", 00:15:34.567 "thread": "nvmf_tgt_poll_group_000", 00:15:34.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:34.567 "listen_address": { 00:15:34.567 "trtype": "TCP", 00:15:34.567 "adrfam": "IPv4", 00:15:34.567 "traddr": "10.0.0.2", 00:15:34.567 "trsvcid": "4420" 00:15:34.567 }, 00:15:34.567 "peer_address": { 00:15:34.567 "trtype": "TCP", 00:15:34.567 "adrfam": "IPv4", 00:15:34.567 "traddr": "10.0.0.1", 00:15:34.567 "trsvcid": "36184" 00:15:34.567 }, 00:15:34.567 "auth": { 00:15:34.567 "state": "completed", 00:15:34.567 "digest": "sha384", 00:15:34.567 "dhgroup": "null" 00:15:34.567 } 00:15:34.567 } 00:15:34.567 ]' 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:34.567 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.827 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:34.827 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:35.396 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.396 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.396 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.396 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.396 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.396 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.396 12:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.396 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.656 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.915 00:15:35.915 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.915 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.915 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.175 { 00:15:36.175 "cntlid": 51, 00:15:36.175 "qid": 0, 00:15:36.175 "state": "enabled", 00:15:36.175 "thread": "nvmf_tgt_poll_group_000", 00:15:36.175 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:36.175 "listen_address": { 00:15:36.175 "trtype": "TCP", 00:15:36.175 "adrfam": "IPv4", 00:15:36.175 "traddr": "10.0.0.2", 00:15:36.175 "trsvcid": "4420" 00:15:36.175 }, 00:15:36.175 "peer_address": { 00:15:36.175 "trtype": "TCP", 00:15:36.175 "adrfam": "IPv4", 00:15:36.175 "traddr": "10.0.0.1", 00:15:36.175 "trsvcid": "55976" 00:15:36.175 }, 00:15:36.175 "auth": { 00:15:36.175 "state": "completed", 00:15:36.175 "digest": "sha384", 00:15:36.175 "dhgroup": "null" 00:15:36.175 } 00:15:36.175 } 00:15:36.175 ]' 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.175 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.434 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:36.434 12:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:37.003 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.003 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.003 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.003 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.003 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.003 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.003 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.003 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.263 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.522 00:15:37.522 12:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.522 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.522 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.782 { 00:15:37.782 "cntlid": 53, 00:15:37.782 "qid": 0, 00:15:37.782 "state": "enabled", 00:15:37.782 "thread": "nvmf_tgt_poll_group_000", 00:15:37.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.782 "listen_address": { 00:15:37.782 "trtype": "TCP", 00:15:37.782 "adrfam": "IPv4", 00:15:37.782 "traddr": "10.0.0.2", 00:15:37.782 "trsvcid": "4420" 00:15:37.782 }, 00:15:37.782 "peer_address": { 00:15:37.782 "trtype": "TCP", 00:15:37.782 "adrfam": "IPv4", 00:15:37.782 "traddr": "10.0.0.1", 00:15:37.782 "trsvcid": "56000" 00:15:37.782 }, 00:15:37.782 "auth": { 00:15:37.782 "state": "completed", 00:15:37.782 "digest": "sha384", 00:15:37.782 "dhgroup": "null" 00:15:37.782 } 00:15:37.782 } 00:15:37.782 ]' 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.782 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.041 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:38.041 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:38.610 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.610 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.611 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.611 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.611 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.611 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.611 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:38.611 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:38.870 
12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.870 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.130 00:15:39.130 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.130 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.130 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.390 12:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.390 { 00:15:39.390 "cntlid": 55, 00:15:39.390 "qid": 0, 00:15:39.390 "state": "enabled", 00:15:39.390 "thread": "nvmf_tgt_poll_group_000", 00:15:39.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.390 "listen_address": { 00:15:39.390 "trtype": "TCP", 00:15:39.390 "adrfam": "IPv4", 00:15:39.390 "traddr": "10.0.0.2", 00:15:39.390 "trsvcid": "4420" 00:15:39.390 }, 00:15:39.390 "peer_address": { 00:15:39.390 "trtype": "TCP", 00:15:39.390 "adrfam": "IPv4", 00:15:39.390 "traddr": "10.0.0.1", 00:15:39.390 "trsvcid": "56044" 00:15:39.390 }, 00:15:39.390 "auth": { 00:15:39.390 "state": "completed", 00:15:39.390 "digest": "sha384", 00:15:39.390 "dhgroup": "null" 00:15:39.390 } 00:15:39.390 } 00:15:39.390 ]' 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.390 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.649 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:39.649 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:40.218 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.218 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.218 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.218 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.218 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.218 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.218 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.218 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:40.218 12:58:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.478 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.737 00:15:40.737 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.737 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.737 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.737 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.737 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.737 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.737 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.997 { 00:15:40.997 "cntlid": 57, 00:15:40.997 "qid": 0, 00:15:40.997 "state": "enabled", 00:15:40.997 "thread": "nvmf_tgt_poll_group_000", 00:15:40.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:40.997 "listen_address": { 00:15:40.997 "trtype": "TCP", 00:15:40.997 "adrfam": "IPv4", 00:15:40.997 "traddr": "10.0.0.2", 00:15:40.997 
"trsvcid": "4420" 00:15:40.997 }, 00:15:40.997 "peer_address": { 00:15:40.997 "trtype": "TCP", 00:15:40.997 "adrfam": "IPv4", 00:15:40.997 "traddr": "10.0.0.1", 00:15:40.997 "trsvcid": "56066" 00:15:40.997 }, 00:15:40.997 "auth": { 00:15:40.997 "state": "completed", 00:15:40.997 "digest": "sha384", 00:15:40.997 "dhgroup": "ffdhe2048" 00:15:40.997 } 00:15:40.997 } 00:15:40.997 ]' 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.997 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.257 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:41.257 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:41.825 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.825 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.825 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.825 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.825 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.825 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.825 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.825 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.085 12:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.085 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.345 00:15:42.345 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.345 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.345 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.345 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.345 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.345 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.345 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.604 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.604 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.604 { 00:15:42.604 "cntlid": 59, 00:15:42.604 "qid": 0, 00:15:42.604 "state": "enabled", 00:15:42.604 "thread": "nvmf_tgt_poll_group_000", 00:15:42.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.604 "listen_address": { 00:15:42.604 "trtype": "TCP", 00:15:42.604 "adrfam": "IPv4", 00:15:42.604 "traddr": "10.0.0.2", 00:15:42.604 "trsvcid": "4420" 00:15:42.604 }, 00:15:42.604 "peer_address": { 00:15:42.604 "trtype": "TCP", 00:15:42.604 "adrfam": "IPv4", 00:15:42.604 "traddr": "10.0.0.1", 00:15:42.604 "trsvcid": "56100" 00:15:42.604 }, 00:15:42.604 "auth": { 00:15:42.604 "state": "completed", 00:15:42.604 "digest": "sha384", 00:15:42.604 "dhgroup": "ffdhe2048" 00:15:42.604 } 00:15:42.604 } 00:15:42.604 ]' 00:15:42.604 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.604 12:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.604 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.604 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.604 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.604 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.604 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.604 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.863 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:42.863 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:43.432 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.432 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.432 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.432 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.432 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.432 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.432 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.432 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.691 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.951 00:15:43.951 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.951 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.951 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.951 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.951 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.951 12:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.951 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.951 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.951 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.951 { 00:15:43.951 "cntlid": 61, 00:15:43.951 "qid": 0, 00:15:43.951 "state": "enabled", 00:15:43.951 "thread": "nvmf_tgt_poll_group_000", 00:15:43.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.951 "listen_address": { 00:15:43.951 "trtype": "TCP", 00:15:43.951 "adrfam": "IPv4", 00:15:43.951 "traddr": "10.0.0.2", 00:15:43.951 "trsvcid": "4420" 00:15:43.951 }, 00:15:43.951 "peer_address": { 00:15:43.951 "trtype": "TCP", 00:15:43.951 "adrfam": "IPv4", 00:15:43.951 "traddr": "10.0.0.1", 00:15:43.951 "trsvcid": "56134" 00:15:43.951 }, 00:15:43.951 "auth": { 00:15:43.951 "state": "completed", 00:15:43.951 "digest": "sha384", 00:15:43.951 "dhgroup": "ffdhe2048" 00:15:43.951 } 00:15:43.951 } 00:15:43.951 ]' 00:15:43.951 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.210 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.210 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.210 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.210 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.210 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.210 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.210 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.468 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:44.468 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:45.037 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.037 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.037 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.037 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.037 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.037 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.037 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:45.037 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.296 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.555 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.555 { 00:15:45.555 "cntlid": 63, 00:15:45.555 "qid": 0, 00:15:45.555 "state": "enabled", 00:15:45.555 "thread": "nvmf_tgt_poll_group_000", 00:15:45.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.555 "listen_address": { 00:15:45.555 "trtype": "TCP", 00:15:45.555 "adrfam": 
"IPv4", 00:15:45.555 "traddr": "10.0.0.2", 00:15:45.555 "trsvcid": "4420" 00:15:45.555 }, 00:15:45.555 "peer_address": { 00:15:45.555 "trtype": "TCP", 00:15:45.555 "adrfam": "IPv4", 00:15:45.555 "traddr": "10.0.0.1", 00:15:45.555 "trsvcid": "56164" 00:15:45.555 }, 00:15:45.555 "auth": { 00:15:45.555 "state": "completed", 00:15:45.555 "digest": "sha384", 00:15:45.555 "dhgroup": "ffdhe2048" 00:15:45.555 } 00:15:45.555 } 00:15:45.555 ]' 00:15:45.555 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.814 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.814 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.814 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.814 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.814 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.814 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.814 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.074 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:46.074 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:46.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:46.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.901 
12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.901 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.160 00:15:47.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.160 12:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.160 { 00:15:47.160 "cntlid": 65, 00:15:47.160 "qid": 0, 00:15:47.160 "state": "enabled", 00:15:47.160 "thread": "nvmf_tgt_poll_group_000", 00:15:47.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.160 "listen_address": { 00:15:47.160 "trtype": "TCP", 00:15:47.160 "adrfam": "IPv4", 00:15:47.160 "traddr": "10.0.0.2", 00:15:47.160 "trsvcid": "4420" 00:15:47.160 }, 00:15:47.160 "peer_address": { 00:15:47.160 "trtype": "TCP", 00:15:47.161 "adrfam": "IPv4", 00:15:47.161 "traddr": "10.0.0.1", 00:15:47.161 "trsvcid": "57416" 00:15:47.161 }, 00:15:47.161 "auth": { 00:15:47.161 "state": "completed", 00:15:47.161 "digest": "sha384", 00:15:47.161 "dhgroup": "ffdhe3072" 00:15:47.161 } 00:15:47.161 } 00:15:47.161 ]' 00:15:47.161 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.420 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:47.420 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.420 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:47.420 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.420 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.420 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.420 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.679 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:47.680 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.249 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.509 00:15:48.768 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.768 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.768 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.768 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.768 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.768 12:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.768 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.768 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.768 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.768 { 00:15:48.768 "cntlid": 67, 00:15:48.768 "qid": 0, 00:15:48.768 "state": "enabled", 00:15:48.768 "thread": "nvmf_tgt_poll_group_000", 00:15:48.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.768 "listen_address": { 00:15:48.768 "trtype": "TCP", 00:15:48.768 "adrfam": "IPv4", 00:15:48.768 "traddr": "10.0.0.2", 00:15:48.768 "trsvcid": "4420" 00:15:48.768 }, 00:15:48.768 "peer_address": { 00:15:48.768 "trtype": "TCP", 00:15:48.768 "adrfam": "IPv4", 00:15:48.768 "traddr": "10.0.0.1", 00:15:48.768 "trsvcid": "57430" 00:15:48.768 }, 00:15:48.768 "auth": { 00:15:48.768 "state": "completed", 00:15:48.769 "digest": "sha384", 00:15:48.769 "dhgroup": "ffdhe3072" 00:15:48.769 } 00:15:48.769 } 00:15:48.769 ]' 00:15:48.769 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.028 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.028 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.028 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.028 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.028 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.028 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.028 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.287 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:49.287 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.856 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.857 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.857 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.116 00:15:50.116 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.116 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.116 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.375 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.375 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.375 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.375 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.375 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.375 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.375 { 00:15:50.375 "cntlid": 69, 00:15:50.375 "qid": 0, 00:15:50.375 "state": "enabled", 00:15:50.375 "thread": "nvmf_tgt_poll_group_000", 00:15:50.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.375 
"listen_address": { 00:15:50.375 "trtype": "TCP", 00:15:50.375 "adrfam": "IPv4", 00:15:50.375 "traddr": "10.0.0.2", 00:15:50.375 "trsvcid": "4420" 00:15:50.375 }, 00:15:50.375 "peer_address": { 00:15:50.375 "trtype": "TCP", 00:15:50.376 "adrfam": "IPv4", 00:15:50.376 "traddr": "10.0.0.1", 00:15:50.376 "trsvcid": "57452" 00:15:50.376 }, 00:15:50.376 "auth": { 00:15:50.376 "state": "completed", 00:15:50.376 "digest": "sha384", 00:15:50.376 "dhgroup": "ffdhe3072" 00:15:50.376 } 00:15:50.376 } 00:15:50.376 ]' 00:15:50.376 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.376 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.376 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.635 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.635 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.635 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.635 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.635 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.894 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:50.894 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:51.463 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.463 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.463 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.463 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.463 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.463 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.463 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.463 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.463 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.721 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.981 { 00:15:51.981 "cntlid": 71, 00:15:51.981 "qid": 0, 00:15:51.981 "state": "enabled", 00:15:51.981 "thread": "nvmf_tgt_poll_group_000", 00:15:51.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.981 "listen_address": { 00:15:51.981 "trtype": "TCP", 00:15:51.981 "adrfam": "IPv4", 00:15:51.981 "traddr": "10.0.0.2", 00:15:51.981 "trsvcid": "4420" 00:15:51.981 }, 00:15:51.981 "peer_address": { 00:15:51.981 "trtype": "TCP", 00:15:51.981 "adrfam": "IPv4", 00:15:51.981 "traddr": "10.0.0.1", 00:15:51.981 "trsvcid": "57496" 00:15:51.981 }, 00:15:51.981 "auth": { 00:15:51.981 "state": "completed", 00:15:51.981 "digest": "sha384", 00:15:51.981 "dhgroup": "ffdhe3072" 00:15:51.981 } 00:15:51.981 } 00:15:51.981 ]' 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.981 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.981 12:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.240 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.240 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.240 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.240 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.240 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.240 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:52.240 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.178 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.437 00:15:53.437 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.437 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.438 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.697 12:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.697 { 00:15:53.697 "cntlid": 73, 00:15:53.697 "qid": 0, 00:15:53.697 "state": "enabled", 00:15:53.697 "thread": "nvmf_tgt_poll_group_000", 00:15:53.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.697 "listen_address": { 00:15:53.697 "trtype": "TCP", 00:15:53.697 "adrfam": "IPv4", 00:15:53.697 "traddr": "10.0.0.2", 00:15:53.697 "trsvcid": "4420" 00:15:53.697 }, 00:15:53.697 "peer_address": { 00:15:53.697 "trtype": "TCP", 00:15:53.697 "adrfam": "IPv4", 00:15:53.697 "traddr": "10.0.0.1", 00:15:53.697 "trsvcid": "57526" 00:15:53.697 }, 00:15:53.697 "auth": { 00:15:53.697 "state": "completed", 00:15:53.697 "digest": "sha384", 00:15:53.697 "dhgroup": "ffdhe4096" 00:15:53.697 } 00:15:53.697 } 00:15:53.697 ]' 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.697 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.697 12:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.957 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:53.957 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:15:54.526 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.526 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.527 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.527 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.527 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.788 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.048 00:15:55.048 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.048 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.048 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.308 { 00:15:55.308 "cntlid": 75, 00:15:55.308 "qid": 0, 00:15:55.308 "state": "enabled", 00:15:55.308 "thread": "nvmf_tgt_poll_group_000", 00:15:55.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.308 
"listen_address": { 00:15:55.308 "trtype": "TCP", 00:15:55.308 "adrfam": "IPv4", 00:15:55.308 "traddr": "10.0.0.2", 00:15:55.308 "trsvcid": "4420" 00:15:55.308 }, 00:15:55.308 "peer_address": { 00:15:55.308 "trtype": "TCP", 00:15:55.308 "adrfam": "IPv4", 00:15:55.308 "traddr": "10.0.0.1", 00:15:55.308 "trsvcid": "57552" 00:15:55.308 }, 00:15:55.308 "auth": { 00:15:55.308 "state": "completed", 00:15:55.308 "digest": "sha384", 00:15:55.308 "dhgroup": "ffdhe4096" 00:15:55.308 } 00:15:55.308 } 00:15:55.308 ]' 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.308 12:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.567 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:55.567 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:15:56.134 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.134 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.134 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.134 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.134 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.134 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.134 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.134 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.393 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.652 00:15:56.652 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:56.652 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.652 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.912 { 00:15:56.912 "cntlid": 77, 00:15:56.912 "qid": 0, 00:15:56.912 "state": "enabled", 00:15:56.912 "thread": "nvmf_tgt_poll_group_000", 00:15:56.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.912 "listen_address": { 00:15:56.912 "trtype": "TCP", 00:15:56.912 "adrfam": "IPv4", 00:15:56.912 "traddr": "10.0.0.2", 00:15:56.912 "trsvcid": "4420" 00:15:56.912 }, 00:15:56.912 "peer_address": { 00:15:56.912 "trtype": "TCP", 00:15:56.912 "adrfam": "IPv4", 00:15:56.912 "traddr": "10.0.0.1", 00:15:56.912 "trsvcid": "57872" 00:15:56.912 }, 00:15:56.912 "auth": { 00:15:56.912 "state": "completed", 00:15:56.912 "digest": "sha384", 00:15:56.912 "dhgroup": "ffdhe4096" 00:15:56.912 } 00:15:56.912 } 00:15:56.912 ]' 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.912 12:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.912 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.171 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:57.171 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:15:57.739 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.739 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.739 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.739 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.739 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.739 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.739 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:57.999 12:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.999 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.258 00:15:58.258 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.258 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.258 12:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.517 12:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.517 { 00:15:58.517 "cntlid": 79, 00:15:58.517 "qid": 0, 00:15:58.517 "state": "enabled", 00:15:58.517 "thread": "nvmf_tgt_poll_group_000", 00:15:58.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.517 "listen_address": { 00:15:58.517 "trtype": "TCP", 00:15:58.517 "adrfam": "IPv4", 00:15:58.517 "traddr": "10.0.0.2", 00:15:58.517 "trsvcid": "4420" 00:15:58.517 }, 00:15:58.517 "peer_address": { 00:15:58.517 "trtype": "TCP", 00:15:58.517 "adrfam": "IPv4", 00:15:58.517 "traddr": "10.0.0.1", 00:15:58.517 "trsvcid": "57912" 00:15:58.517 }, 00:15:58.517 "auth": { 00:15:58.517 "state": "completed", 00:15:58.517 "digest": "sha384", 00:15:58.517 "dhgroup": "ffdhe4096" 00:15:58.517 } 00:15:58.517 } 00:15:58.517 ]' 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.517 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.517 12:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.776 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:58.776 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:15:59.345 12:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.345 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.345 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.345 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.345 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.345 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.345 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.345 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:15:59.345 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.605 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.865 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.124 { 00:16:00.124 "cntlid": 81, 00:16:00.124 "qid": 0, 00:16:00.124 "state": "enabled", 00:16:00.124 "thread": "nvmf_tgt_poll_group_000", 00:16:00.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.124 "listen_address": { 
00:16:00.124 "trtype": "TCP", 00:16:00.124 "adrfam": "IPv4", 00:16:00.124 "traddr": "10.0.0.2", 00:16:00.124 "trsvcid": "4420" 00:16:00.124 }, 00:16:00.124 "peer_address": { 00:16:00.124 "trtype": "TCP", 00:16:00.124 "adrfam": "IPv4", 00:16:00.124 "traddr": "10.0.0.1", 00:16:00.124 "trsvcid": "57944" 00:16:00.124 }, 00:16:00.124 "auth": { 00:16:00.124 "state": "completed", 00:16:00.124 "digest": "sha384", 00:16:00.124 "dhgroup": "ffdhe6144" 00:16:00.124 } 00:16:00.124 } 00:16:00.124 ]' 00:16:00.124 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.384 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.384 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.384 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:00.384 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.384 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.384 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.384 12:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.643 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:00.643 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:01.212 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.212 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.212 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.212 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.212 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.212 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.212 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:01.212 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.472 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.731 00:16:01.731 12:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.731 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.731 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.990 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.990 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.991 { 00:16:01.991 "cntlid": 83, 00:16:01.991 "qid": 0, 00:16:01.991 "state": "enabled", 00:16:01.991 "thread": "nvmf_tgt_poll_group_000", 00:16:01.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:01.991 "listen_address": { 00:16:01.991 "trtype": "TCP", 00:16:01.991 "adrfam": "IPv4", 00:16:01.991 "traddr": "10.0.0.2", 00:16:01.991 "trsvcid": "4420" 00:16:01.991 }, 00:16:01.991 "peer_address": { 00:16:01.991 "trtype": "TCP", 00:16:01.991 "adrfam": "IPv4", 00:16:01.991 "traddr": "10.0.0.1", 00:16:01.991 "trsvcid": "57972" 00:16:01.991 }, 00:16:01.991 "auth": { 00:16:01.991 "state": "completed", 00:16:01.991 "digest": "sha384", 00:16:01.991 "dhgroup": "ffdhe6144" 00:16:01.991 } 00:16:01.991 } 00:16:01.991 ]' 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.991 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.250 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:02.250 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:02.818 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.818 12:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.818 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.818 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.818 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.818 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.818 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.818 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.077 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.338 00:16:03.338 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.338 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.338 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.597 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.597 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.597 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.597 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.597 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.597 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.597 { 00:16:03.597 "cntlid": 85, 00:16:03.597 "qid": 0, 00:16:03.597 "state": "enabled", 00:16:03.597 "thread": "nvmf_tgt_poll_group_000", 00:16:03.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.597 "listen_address": { 00:16:03.597 "trtype": "TCP", 00:16:03.597 "adrfam": "IPv4", 00:16:03.597 "traddr": "10.0.0.2", 00:16:03.597 "trsvcid": "4420" 00:16:03.597 }, 00:16:03.597 "peer_address": { 00:16:03.597 "trtype": "TCP", 00:16:03.597 "adrfam": "IPv4", 00:16:03.597 "traddr": "10.0.0.1", 00:16:03.597 "trsvcid": "57992" 00:16:03.597 }, 00:16:03.597 "auth": { 00:16:03.597 "state": "completed", 00:16:03.597 "digest": "sha384", 00:16:03.597 "dhgroup": "ffdhe6144" 00:16:03.597 } 00:16:03.597 } 00:16:03.597 ]' 00:16:03.597 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.597 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.597 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.856 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.856 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.856 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:03.856 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.856 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.114 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:04.114 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:04.683 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.683 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.683 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.683 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.683 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.683 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:04.683 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.683 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.942 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.202 00:16:05.202 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.202 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.202 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.461 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.461 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.461 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.461 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.461 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.461 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.461 { 00:16:05.461 "cntlid": 87, 00:16:05.461 "qid": 0, 00:16:05.461 "state": "enabled", 00:16:05.461 "thread": "nvmf_tgt_poll_group_000", 00:16:05.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.461 "listen_address": { 00:16:05.461 "trtype": 
"TCP", 00:16:05.461 "adrfam": "IPv4", 00:16:05.461 "traddr": "10.0.0.2", 00:16:05.461 "trsvcid": "4420" 00:16:05.461 }, 00:16:05.461 "peer_address": { 00:16:05.461 "trtype": "TCP", 00:16:05.461 "adrfam": "IPv4", 00:16:05.461 "traddr": "10.0.0.1", 00:16:05.461 "trsvcid": "58014" 00:16:05.461 }, 00:16:05.461 "auth": { 00:16:05.461 "state": "completed", 00:16:05.461 "digest": "sha384", 00:16:05.461 "dhgroup": "ffdhe6144" 00:16:05.461 } 00:16:05.461 } 00:16:05.461 ]' 00:16:05.461 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.461 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.461 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.461 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.461 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.461 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.461 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.461 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.720 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:05.720 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:06.288 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.288 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.288 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.288 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.288 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.288 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.288 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.288 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.288 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.547 12:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.547 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.115 00:16:07.115 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.115 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.115 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.115 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.115 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.115 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.115 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.374 { 00:16:07.374 "cntlid": 89, 00:16:07.374 "qid": 0, 00:16:07.374 "state": "enabled", 00:16:07.374 "thread": "nvmf_tgt_poll_group_000", 00:16:07.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.374 "listen_address": { 00:16:07.374 "trtype": "TCP", 00:16:07.374 "adrfam": "IPv4", 00:16:07.374 "traddr": "10.0.0.2", 00:16:07.374 "trsvcid": "4420" 00:16:07.374 }, 00:16:07.374 "peer_address": { 00:16:07.374 "trtype": "TCP", 00:16:07.374 "adrfam": "IPv4", 00:16:07.374 "traddr": "10.0.0.1", 00:16:07.374 "trsvcid": "60310" 00:16:07.374 }, 00:16:07.374 "auth": { 00:16:07.374 "state": "completed", 00:16:07.374 "digest": "sha384", 00:16:07.374 "dhgroup": "ffdhe8192" 00:16:07.374 } 00:16:07.374 } 00:16:07.374 ]' 00:16:07.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.374 12:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.633 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:07.633 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:08.202 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:08.202 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.202 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.202 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.202 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.202 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.202 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.202 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.462 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.030 00:16:09.030 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.030 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.030 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.031 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.031 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.031 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.031 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.031 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.031 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.031 { 00:16:09.031 "cntlid": 91, 00:16:09.031 "qid": 0, 00:16:09.031 "state": "enabled", 00:16:09.031 "thread": "nvmf_tgt_poll_group_000", 00:16:09.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.031 "listen_address": { 00:16:09.031 "trtype": "TCP", 00:16:09.031 "adrfam": "IPv4", 00:16:09.031 "traddr": "10.0.0.2", 00:16:09.031 "trsvcid": "4420" 00:16:09.031 }, 00:16:09.031 "peer_address": { 00:16:09.031 "trtype": "TCP", 00:16:09.031 "adrfam": "IPv4", 00:16:09.031 "traddr": "10.0.0.1", 00:16:09.031 "trsvcid": "60334" 00:16:09.031 }, 00:16:09.031 "auth": { 00:16:09.031 "state": "completed", 00:16:09.031 "digest": "sha384", 00:16:09.031 "dhgroup": "ffdhe8192" 00:16:09.031 } 00:16:09.031 } 00:16:09.031 ]' 00:16:09.031 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.031 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.031 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.290 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.290 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.290 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:09.290 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.290 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.290 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:09.549 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.118 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.686 00:16:10.686 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.686 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.686 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.945 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.945 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.945 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.945 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.945 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.945 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.945 { 00:16:10.945 "cntlid": 93, 00:16:10.945 "qid": 0, 00:16:10.945 "state": "enabled", 00:16:10.945 "thread": "nvmf_tgt_poll_group_000", 00:16:10.945 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.945 "listen_address": { 00:16:10.945 "trtype": "TCP", 00:16:10.945 "adrfam": "IPv4", 00:16:10.945 "traddr": "10.0.0.2", 00:16:10.945 "trsvcid": "4420" 00:16:10.945 }, 00:16:10.945 "peer_address": { 00:16:10.945 "trtype": "TCP", 00:16:10.945 "adrfam": "IPv4", 00:16:10.945 "traddr": "10.0.0.1", 00:16:10.946 "trsvcid": "60350" 00:16:10.946 }, 00:16:10.946 "auth": { 00:16:10.946 "state": "completed", 00:16:10.946 "digest": "sha384", 00:16:10.946 "dhgroup": "ffdhe8192" 00:16:10.946 } 00:16:10.946 } 00:16:10.946 ]' 00:16:10.946 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.946 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.946 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.946 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.946 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.946 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.946 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.946 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.205 12:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:11.205 12:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:11.773 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.773 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.773 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.773 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.773 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.773 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.773 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.773 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.032 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.600 00:16:12.600 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:12.600 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.600 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.858 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.858 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.858 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.859 { 00:16:12.859 "cntlid": 95, 00:16:12.859 "qid": 0, 00:16:12.859 "state": "enabled", 00:16:12.859 "thread": "nvmf_tgt_poll_group_000", 00:16:12.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.859 "listen_address": { 00:16:12.859 "trtype": "TCP", 00:16:12.859 "adrfam": "IPv4", 00:16:12.859 "traddr": "10.0.0.2", 00:16:12.859 "trsvcid": "4420" 00:16:12.859 }, 00:16:12.859 "peer_address": { 00:16:12.859 "trtype": "TCP", 00:16:12.859 "adrfam": "IPv4", 00:16:12.859 "traddr": "10.0.0.1", 00:16:12.859 "trsvcid": "60388" 00:16:12.859 }, 00:16:12.859 "auth": { 00:16:12.859 "state": "completed", 00:16:12.859 "digest": "sha384", 00:16:12.859 "dhgroup": "ffdhe8192" 00:16:12.859 } 00:16:12.859 } 00:16:12.859 ]' 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.859 12:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.859 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.118 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:13.118 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:13.686 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.686 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.686 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.686 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.687 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.687 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:13.687 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.687 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.687 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.687 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.946 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.205 00:16:14.205 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.205 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.205 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.464 12:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.464 { 00:16:14.464 "cntlid": 97, 00:16:14.464 "qid": 0, 00:16:14.464 "state": "enabled", 00:16:14.464 "thread": "nvmf_tgt_poll_group_000", 00:16:14.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.464 "listen_address": { 00:16:14.464 "trtype": "TCP", 00:16:14.464 "adrfam": "IPv4", 00:16:14.464 "traddr": "10.0.0.2", 00:16:14.464 "trsvcid": "4420" 00:16:14.464 }, 00:16:14.464 "peer_address": { 00:16:14.464 "trtype": "TCP", 00:16:14.464 "adrfam": "IPv4", 00:16:14.464 "traddr": "10.0.0.1", 00:16:14.464 "trsvcid": "60398" 00:16:14.464 }, 00:16:14.464 "auth": { 00:16:14.464 "state": "completed", 00:16:14.464 "digest": "sha512", 00:16:14.464 "dhgroup": "null" 00:16:14.464 } 00:16:14.464 } 00:16:14.464 ]' 00:16:14.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.464 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.464 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.464 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.464 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.464 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.723 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:14.723 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:15.292 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.292 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.292 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.292 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.292 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.292 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.292 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.292 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.551 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.811 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.811 { 00:16:15.811 "cntlid": 99, 
00:16:15.811 "qid": 0, 00:16:15.811 "state": "enabled", 00:16:15.811 "thread": "nvmf_tgt_poll_group_000", 00:16:15.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.811 "listen_address": { 00:16:15.811 "trtype": "TCP", 00:16:15.811 "adrfam": "IPv4", 00:16:15.811 "traddr": "10.0.0.2", 00:16:15.811 "trsvcid": "4420" 00:16:15.811 }, 00:16:15.811 "peer_address": { 00:16:15.811 "trtype": "TCP", 00:16:15.811 "adrfam": "IPv4", 00:16:15.811 "traddr": "10.0.0.1", 00:16:15.811 "trsvcid": "60432" 00:16:15.811 }, 00:16:15.811 "auth": { 00:16:15.811 "state": "completed", 00:16:15.811 "digest": "sha512", 00:16:15.811 "dhgroup": "null" 00:16:15.811 } 00:16:15.811 } 00:16:15.811 ]' 00:16:15.811 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.070 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.070 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.070 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.070 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.070 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.070 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.070 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.330 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret 
DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:16.330 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:16.898 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.898 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.898 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.898 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.898 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.898 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.898 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:16.898 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.158 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.158 00:16:17.417 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.417 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.417 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.417 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.417 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.417 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.417 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.417 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.417 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.417 { 00:16:17.417 "cntlid": 101, 00:16:17.417 "qid": 0, 00:16:17.417 "state": "enabled", 00:16:17.417 "thread": "nvmf_tgt_poll_group_000", 00:16:17.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.417 "listen_address": { 00:16:17.417 "trtype": "TCP", 00:16:17.417 "adrfam": "IPv4", 00:16:17.417 "traddr": "10.0.0.2", 00:16:17.417 "trsvcid": "4420" 00:16:17.417 }, 00:16:17.417 "peer_address": { 00:16:17.417 "trtype": "TCP", 00:16:17.417 "adrfam": "IPv4", 00:16:17.417 "traddr": "10.0.0.1", 00:16:17.417 "trsvcid": "47466" 00:16:17.417 }, 00:16:17.417 "auth": { 00:16:17.417 "state": "completed", 00:16:17.417 "digest": "sha512", 00:16:17.417 "dhgroup": "null" 00:16:17.417 } 00:16:17.417 } 
00:16:17.417 ]' 00:16:17.417 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.677 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.677 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.677 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:17.677 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.677 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.677 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.677 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.936 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:17.936 12:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:18.503 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.503 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.503 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.503 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.503 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.503 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.503 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.503 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.503 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.763 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.022 00:16:19.022 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.022 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.022 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.022 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.022 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:19.022 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.022 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.282 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.282 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.282 { 00:16:19.282 "cntlid": 103, 00:16:19.282 "qid": 0, 00:16:19.282 "state": "enabled", 00:16:19.282 "thread": "nvmf_tgt_poll_group_000", 00:16:19.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.282 "listen_address": { 00:16:19.282 "trtype": "TCP", 00:16:19.282 "adrfam": "IPv4", 00:16:19.282 "traddr": "10.0.0.2", 00:16:19.282 "trsvcid": "4420" 00:16:19.282 }, 00:16:19.282 "peer_address": { 00:16:19.282 "trtype": "TCP", 00:16:19.282 "adrfam": "IPv4", 00:16:19.282 "traddr": "10.0.0.1", 00:16:19.282 "trsvcid": "47494" 00:16:19.282 }, 00:16:19.282 "auth": { 00:16:19.282 "state": "completed", 00:16:19.282 "digest": "sha512", 00:16:19.282 "dhgroup": "null" 00:16:19.282 } 00:16:19.282 } 00:16:19.282 ]' 00:16:19.282 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.282 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.282 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.282 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.282 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.282 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.282 12:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.282 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.541 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:19.541 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:20.109 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.109 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.109 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.109 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.109 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.109 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.109 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.109 12:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:20.109 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.369 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.629 00:16:20.629 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.629 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.629 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.629 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.629 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.629 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.629 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.889 { 00:16:20.889 "cntlid": 105, 00:16:20.889 "qid": 0, 00:16:20.889 "state": "enabled", 00:16:20.889 "thread": "nvmf_tgt_poll_group_000", 00:16:20.889 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.889 "listen_address": { 00:16:20.889 "trtype": "TCP", 00:16:20.889 "adrfam": "IPv4", 00:16:20.889 "traddr": "10.0.0.2", 00:16:20.889 "trsvcid": "4420" 00:16:20.889 }, 00:16:20.889 "peer_address": { 00:16:20.889 "trtype": "TCP", 00:16:20.889 "adrfam": "IPv4", 00:16:20.889 "traddr": "10.0.0.1", 00:16:20.889 "trsvcid": "47526" 00:16:20.889 }, 00:16:20.889 "auth": { 00:16:20.889 "state": "completed", 00:16:20.889 "digest": "sha512", 00:16:20.889 "dhgroup": "ffdhe2048" 00:16:20.889 } 00:16:20.889 } 00:16:20.889 ]' 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.889 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.148 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret 
DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:21.148 12:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:21.715 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.715 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.715 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.715 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.715 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.715 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.716 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.716 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.975 12:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.975 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.234 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.234 { 00:16:22.234 "cntlid": 107, 00:16:22.234 "qid": 0, 00:16:22.234 "state": "enabled", 00:16:22.234 "thread": "nvmf_tgt_poll_group_000", 00:16:22.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.234 "listen_address": { 00:16:22.234 "trtype": "TCP", 00:16:22.234 "adrfam": "IPv4", 00:16:22.234 "traddr": "10.0.0.2", 00:16:22.234 "trsvcid": "4420" 00:16:22.234 }, 00:16:22.234 "peer_address": { 00:16:22.234 "trtype": "TCP", 00:16:22.234 "adrfam": "IPv4", 00:16:22.234 "traddr": "10.0.0.1", 00:16:22.234 "trsvcid": "47550" 00:16:22.234 }, 00:16:22.234 "auth": { 00:16:22.234 "state": 
"completed", 00:16:22.234 "digest": "sha512", 00:16:22.234 "dhgroup": "ffdhe2048" 00:16:22.234 } 00:16:22.234 } 00:16:22.234 ]' 00:16:22.234 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.493 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.493 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.493 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.493 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.493 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.493 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.493 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.753 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:22.753 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:23.321 12:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.321 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.321 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.321 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.321 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.321 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.321 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:23.321 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.580 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.840 00:16:23.840 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.840 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.840 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.840 
12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.840 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.840 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.840 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.099 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.099 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.099 { 00:16:24.099 "cntlid": 109, 00:16:24.099 "qid": 0, 00:16:24.099 "state": "enabled", 00:16:24.099 "thread": "nvmf_tgt_poll_group_000", 00:16:24.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.099 "listen_address": { 00:16:24.099 "trtype": "TCP", 00:16:24.099 "adrfam": "IPv4", 00:16:24.099 "traddr": "10.0.0.2", 00:16:24.099 "trsvcid": "4420" 00:16:24.099 }, 00:16:24.099 "peer_address": { 00:16:24.099 "trtype": "TCP", 00:16:24.099 "adrfam": "IPv4", 00:16:24.099 "traddr": "10.0.0.1", 00:16:24.099 "trsvcid": "47580" 00:16:24.099 }, 00:16:24.099 "auth": { 00:16:24.099 "state": "completed", 00:16:24.099 "digest": "sha512", 00:16:24.099 "dhgroup": "ffdhe2048" 00:16:24.099 } 00:16:24.099 } 00:16:24.099 ]' 00:16:24.099 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.099 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.099 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.099 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.099 12:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.099 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.099 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.099 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.358 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:24.358 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:24.926 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.926 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.926 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.926 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.926 
12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.926 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.926 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.926 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.185 12:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.185 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.186 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.456 00:16:25.456 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.456 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.456 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.456 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.456 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.456 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.456 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.456 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.456 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.456 { 00:16:25.456 "cntlid": 111, 
00:16:25.456 "qid": 0, 00:16:25.456 "state": "enabled", 00:16:25.456 "thread": "nvmf_tgt_poll_group_000", 00:16:25.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.456 "listen_address": { 00:16:25.456 "trtype": "TCP", 00:16:25.456 "adrfam": "IPv4", 00:16:25.456 "traddr": "10.0.0.2", 00:16:25.456 "trsvcid": "4420" 00:16:25.456 }, 00:16:25.456 "peer_address": { 00:16:25.456 "trtype": "TCP", 00:16:25.456 "adrfam": "IPv4", 00:16:25.456 "traddr": "10.0.0.1", 00:16:25.456 "trsvcid": "47604" 00:16:25.456 }, 00:16:25.456 "auth": { 00:16:25.456 "state": "completed", 00:16:25.456 "digest": "sha512", 00:16:25.456 "dhgroup": "ffdhe2048" 00:16:25.456 } 00:16:25.456 } 00:16:25.456 ]' 00:16:25.456 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.716 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.716 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.716 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.716 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.716 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.716 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.716 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.975 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:25.975 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:26.544 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.544 12:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.544 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.803 00:16:26.803 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.803 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.803 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.062 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.062 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.062 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.062 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.062 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.062 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.062 { 00:16:27.062 "cntlid": 113, 00:16:27.062 "qid": 0, 00:16:27.062 "state": "enabled", 00:16:27.062 "thread": "nvmf_tgt_poll_group_000", 00:16:27.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.062 "listen_address": { 00:16:27.062 "trtype": "TCP", 00:16:27.062 "adrfam": "IPv4", 00:16:27.062 "traddr": "10.0.0.2", 00:16:27.062 "trsvcid": "4420" 00:16:27.062 }, 00:16:27.062 "peer_address": { 00:16:27.062 "trtype": "TCP", 00:16:27.062 "adrfam": "IPv4", 00:16:27.062 "traddr": "10.0.0.1", 00:16:27.062 "trsvcid": "38168" 00:16:27.062 }, 00:16:27.062 "auth": { 00:16:27.062 "state": 
"completed", 00:16:27.062 "digest": "sha512", 00:16:27.062 "dhgroup": "ffdhe3072" 00:16:27.062 } 00:16:27.062 } 00:16:27.062 ]' 00:16:27.062 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.062 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.062 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.322 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.322 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.322 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.322 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.322 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.322 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:27.322 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret 
DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:27.890 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.890 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.890 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.890 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.149 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.150 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.150 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.150 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.150 12:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.409 00:16:28.409 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.409 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.409 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.669 { 00:16:28.669 "cntlid": 115, 00:16:28.669 "qid": 0, 00:16:28.669 "state": "enabled", 00:16:28.669 "thread": "nvmf_tgt_poll_group_000", 00:16:28.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.669 "listen_address": { 00:16:28.669 "trtype": "TCP", 00:16:28.669 "adrfam": "IPv4", 00:16:28.669 "traddr": "10.0.0.2", 00:16:28.669 "trsvcid": "4420" 00:16:28.669 }, 00:16:28.669 "peer_address": { 00:16:28.669 "trtype": "TCP", 00:16:28.669 "adrfam": "IPv4", 00:16:28.669 "traddr": "10.0.0.1", 00:16:28.669 "trsvcid": "38194" 00:16:28.669 }, 00:16:28.669 "auth": { 00:16:28.669 "state": "completed", 00:16:28.669 "digest": "sha512", 00:16:28.669 "dhgroup": "ffdhe3072" 00:16:28.669 } 00:16:28.669 } 00:16:28.669 ]' 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.669 12:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.669 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.929 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.929 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.929 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.929 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:28.929 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:29.496 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.496 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.496 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:29.496 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.496 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.496 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.496 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.496 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.755 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.014 00:16:30.014 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.014 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.014 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.274 12:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.274 { 00:16:30.274 "cntlid": 117, 00:16:30.274 "qid": 0, 00:16:30.274 "state": "enabled", 00:16:30.274 "thread": "nvmf_tgt_poll_group_000", 00:16:30.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.274 "listen_address": { 00:16:30.274 "trtype": "TCP", 00:16:30.274 "adrfam": "IPv4", 00:16:30.274 "traddr": "10.0.0.2", 00:16:30.274 "trsvcid": "4420" 00:16:30.274 }, 00:16:30.274 "peer_address": { 00:16:30.274 "trtype": "TCP", 00:16:30.274 "adrfam": "IPv4", 00:16:30.274 "traddr": "10.0.0.1", 00:16:30.274 "trsvcid": "38234" 00:16:30.274 }, 00:16:30.274 "auth": { 00:16:30.274 "state": "completed", 00:16:30.274 "digest": "sha512", 00:16:30.274 "dhgroup": "ffdhe3072" 00:16:30.274 } 00:16:30.274 } 00:16:30.274 ]' 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.274 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.534 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.534 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.534 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.534 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:30.534 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:31.101 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.102 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.102 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.102 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.102 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.102 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.102 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.102 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.361 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.620 00:16:31.620 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.620 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.620 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.879 { 00:16:31.879 "cntlid": 119, 00:16:31.879 "qid": 0, 00:16:31.879 "state": "enabled", 00:16:31.879 "thread": "nvmf_tgt_poll_group_000", 00:16:31.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.879 "listen_address": { 00:16:31.879 "trtype": "TCP", 00:16:31.879 "adrfam": "IPv4", 00:16:31.879 "traddr": "10.0.0.2", 00:16:31.879 "trsvcid": "4420" 00:16:31.879 }, 00:16:31.879 "peer_address": { 00:16:31.879 "trtype": "TCP", 00:16:31.879 "adrfam": "IPv4", 00:16:31.879 "traddr": "10.0.0.1", 
00:16:31.879 "trsvcid": "38260" 00:16:31.879 }, 00:16:31.879 "auth": { 00:16:31.879 "state": "completed", 00:16:31.879 "digest": "sha512", 00:16:31.879 "dhgroup": "ffdhe3072" 00:16:31.879 } 00:16:31.879 } 00:16:31.879 ]' 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.879 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.137 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:32.137 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:32.706 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.706 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.706 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.706 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.706 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.706 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.706 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.706 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.706 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.965 12:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.965 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.223 00:16:33.223 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.223 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.223 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.482 { 00:16:33.482 "cntlid": 121, 00:16:33.482 "qid": 0, 00:16:33.482 "state": "enabled", 00:16:33.482 "thread": "nvmf_tgt_poll_group_000", 00:16:33.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.482 "listen_address": { 00:16:33.482 "trtype": "TCP", 00:16:33.482 "adrfam": "IPv4", 00:16:33.482 "traddr": "10.0.0.2", 00:16:33.482 "trsvcid": "4420" 00:16:33.482 }, 00:16:33.482 "peer_address": { 00:16:33.482 "trtype": "TCP", 00:16:33.482 "adrfam": "IPv4", 00:16:33.482 "traddr": "10.0.0.1", 00:16:33.482 "trsvcid": "38292" 00:16:33.482 }, 00:16:33.482 "auth": { 00:16:33.482 "state": "completed", 00:16:33.482 "digest": "sha512", 00:16:33.482 "dhgroup": "ffdhe4096" 00:16:33.482 } 00:16:33.482 } 00:16:33.482 ]' 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.482 12:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.482 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.741 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.741 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.741 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.741 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:33.741 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:34.310 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.310 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.310 12:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.310 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.310 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.310 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.310 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.310 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.569 12:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.569 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.829 00:16:34.829 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.829 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.829 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.088 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.088 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.088 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.088 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.088 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.088 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.088 { 00:16:35.088 "cntlid": 123, 00:16:35.088 "qid": 0, 00:16:35.088 "state": "enabled", 00:16:35.088 "thread": "nvmf_tgt_poll_group_000", 00:16:35.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.088 "listen_address": { 00:16:35.088 "trtype": "TCP", 00:16:35.088 "adrfam": "IPv4", 00:16:35.088 "traddr": "10.0.0.2", 00:16:35.088 "trsvcid": "4420" 00:16:35.088 }, 00:16:35.088 "peer_address": { 00:16:35.088 "trtype": "TCP", 00:16:35.088 "adrfam": "IPv4", 00:16:35.088 "traddr": "10.0.0.1", 00:16:35.088 "trsvcid": "38322" 00:16:35.088 }, 00:16:35.088 "auth": { 00:16:35.088 "state": "completed", 00:16:35.088 "digest": "sha512", 00:16:35.088 "dhgroup": "ffdhe4096" 00:16:35.088 } 00:16:35.088 } 00:16:35.088 ]' 00:16:35.088 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.088 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.089 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.348 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.348 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.348 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.348 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.348 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.607 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:35.607 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.177 12:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.177 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.436 00:16:36.696 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.696 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.696 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.696 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.696 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.696 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.697 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.697 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.697 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.697 { 00:16:36.697 "cntlid": 125, 00:16:36.697 "qid": 0, 00:16:36.697 "state": "enabled", 00:16:36.697 "thread": "nvmf_tgt_poll_group_000", 00:16:36.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.697 "listen_address": { 00:16:36.697 "trtype": "TCP", 00:16:36.697 "adrfam": "IPv4", 00:16:36.697 "traddr": "10.0.0.2", 00:16:36.697 
"trsvcid": "4420" 00:16:36.697 }, 00:16:36.697 "peer_address": { 00:16:36.697 "trtype": "TCP", 00:16:36.697 "adrfam": "IPv4", 00:16:36.697 "traddr": "10.0.0.1", 00:16:36.697 "trsvcid": "45194" 00:16:36.697 }, 00:16:36.697 "auth": { 00:16:36.697 "state": "completed", 00:16:36.697 "digest": "sha512", 00:16:36.697 "dhgroup": "ffdhe4096" 00:16:36.697 } 00:16:36.697 } 00:16:36.697 ]' 00:16:36.697 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.697 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.697 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.957 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.957 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.957 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.957 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.957 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.216 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:37.216 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.785 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.045 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.305 { 00:16:38.305 "cntlid": 127, 00:16:38.305 "qid": 0, 00:16:38.305 "state": "enabled", 00:16:38.305 "thread": "nvmf_tgt_poll_group_000", 00:16:38.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.305 "listen_address": { 00:16:38.305 "trtype": "TCP", 00:16:38.305 "adrfam": "IPv4", 00:16:38.305 "traddr": "10.0.0.2", 00:16:38.305 "trsvcid": "4420" 00:16:38.305 }, 00:16:38.305 "peer_address": { 00:16:38.305 "trtype": "TCP", 00:16:38.305 "adrfam": "IPv4", 00:16:38.305 "traddr": "10.0.0.1", 00:16:38.305 "trsvcid": "45214" 00:16:38.305 }, 00:16:38.305 "auth": { 00:16:38.305 "state": "completed", 00:16:38.305 "digest": "sha512", 00:16:38.305 "dhgroup": "ffdhe4096" 00:16:38.305 } 00:16:38.305 } 00:16:38.305 ]' 00:16:38.305 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.305 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.565 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.565 12:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.565 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.565 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.565 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.565 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.825 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:38.825 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:39.394 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.394 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.394 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.394 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:39.394 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.394 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.394 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.394 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.394 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.394 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.963 00:16:39.963 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.963 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.963 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.963 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.963 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.963 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.963 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.963 12:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.963 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.963 { 00:16:39.963 "cntlid": 129, 00:16:39.963 "qid": 0, 00:16:39.963 "state": "enabled", 00:16:39.963 "thread": "nvmf_tgt_poll_group_000", 00:16:39.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.963 "listen_address": { 00:16:39.963 "trtype": "TCP", 00:16:39.963 "adrfam": "IPv4", 00:16:39.963 "traddr": "10.0.0.2", 00:16:39.963 "trsvcid": "4420" 00:16:39.963 }, 00:16:39.963 "peer_address": { 00:16:39.963 "trtype": "TCP", 00:16:39.963 "adrfam": "IPv4", 00:16:39.963 "traddr": "10.0.0.1", 00:16:39.963 "trsvcid": "45224" 00:16:39.963 }, 00:16:39.963 "auth": { 00:16:39.963 "state": "completed", 00:16:39.963 "digest": "sha512", 00:16:39.963 "dhgroup": "ffdhe6144" 00:16:39.963 } 00:16:39.963 } 00:16:39.963 ]' 00:16:39.963 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.222 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.222 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.222 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.222 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.222 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.222 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.222 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.481 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:40.481 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:41.050 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.050 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.050 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.050 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.050 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.050 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.050 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.050 12:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.310 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.578 00:16:41.578 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.578 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.578 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.837 { 00:16:41.837 "cntlid": 131, 00:16:41.837 "qid": 0, 00:16:41.837 "state": "enabled", 00:16:41.837 "thread": "nvmf_tgt_poll_group_000", 00:16:41.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.837 "listen_address": { 00:16:41.837 "trtype": "TCP", 00:16:41.837 "adrfam": "IPv4", 00:16:41.837 "traddr": "10.0.0.2", 00:16:41.837 
"trsvcid": "4420" 00:16:41.837 }, 00:16:41.837 "peer_address": { 00:16:41.837 "trtype": "TCP", 00:16:41.837 "adrfam": "IPv4", 00:16:41.837 "traddr": "10.0.0.1", 00:16:41.837 "trsvcid": "45254" 00:16:41.837 }, 00:16:41.837 "auth": { 00:16:41.837 "state": "completed", 00:16:41.837 "digest": "sha512", 00:16:41.837 "dhgroup": "ffdhe6144" 00:16:41.837 } 00:16:41.837 } 00:16:41.837 ]' 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.837 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.095 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:42.095 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:42.663 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.663 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.664 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.664 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.664 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.664 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.664 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.664 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.923 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.183 00:16:43.443 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.443 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:43.443 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.443 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.443 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.443 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.443 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.443 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.443 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.443 { 00:16:43.443 "cntlid": 133, 00:16:43.443 "qid": 0, 00:16:43.443 "state": "enabled", 00:16:43.443 "thread": "nvmf_tgt_poll_group_000", 00:16:43.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.443 "listen_address": { 00:16:43.443 "trtype": "TCP", 00:16:43.443 "adrfam": "IPv4", 00:16:43.443 "traddr": "10.0.0.2", 00:16:43.443 "trsvcid": "4420" 00:16:43.443 }, 00:16:43.443 "peer_address": { 00:16:43.443 "trtype": "TCP", 00:16:43.443 "adrfam": "IPv4", 00:16:43.443 "traddr": "10.0.0.1", 00:16:43.443 "trsvcid": "45282" 00:16:43.444 }, 00:16:43.444 "auth": { 00:16:43.444 "state": "completed", 00:16:43.444 "digest": "sha512", 00:16:43.444 "dhgroup": "ffdhe6144" 00:16:43.444 } 00:16:43.444 } 00:16:43.444 ]' 00:16:43.444 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.444 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.444 12:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.702 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.702 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.702 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.702 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.702 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.703 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:43.703 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:44.278 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.278 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.278 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.278 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.539 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.539 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.539 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.539 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.539 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:44.539 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.539 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.539 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.540 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.540 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.540 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:44.540 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.540 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.540 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.540 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.540 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.540 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.110 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.110 { 00:16:45.110 "cntlid": 135, 00:16:45.110 "qid": 0, 00:16:45.110 "state": "enabled", 00:16:45.110 "thread": "nvmf_tgt_poll_group_000", 00:16:45.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.110 "listen_address": { 00:16:45.110 "trtype": "TCP", 00:16:45.110 "adrfam": "IPv4", 00:16:45.110 "traddr": "10.0.0.2", 00:16:45.110 "trsvcid": "4420" 00:16:45.110 }, 00:16:45.110 "peer_address": { 00:16:45.110 "trtype": "TCP", 00:16:45.110 "adrfam": "IPv4", 00:16:45.110 "traddr": "10.0.0.1", 00:16:45.110 "trsvcid": "45326" 00:16:45.110 }, 00:16:45.110 "auth": { 00:16:45.110 "state": "completed", 00:16:45.110 "digest": "sha512", 00:16:45.110 "dhgroup": "ffdhe6144" 00:16:45.110 } 00:16:45.110 } 00:16:45.110 ]' 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.110 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.370 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.370 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.370 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.370 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.370 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.370 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:45.370 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:45.938 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.938 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.938 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.938 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.938 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.938 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.938 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.938 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.938 12:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.197 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.766 00:16:46.766 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.766 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.766 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.025 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.025 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.025 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.025 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.025 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.026 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.026 { 00:16:47.026 "cntlid": 137, 00:16:47.026 "qid": 0, 00:16:47.026 "state": "enabled", 00:16:47.026 "thread": "nvmf_tgt_poll_group_000", 00:16:47.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.026 "listen_address": { 00:16:47.026 "trtype": "TCP", 00:16:47.026 "adrfam": "IPv4", 00:16:47.026 "traddr": "10.0.0.2", 00:16:47.026 
"trsvcid": "4420" 00:16:47.026 }, 00:16:47.026 "peer_address": { 00:16:47.026 "trtype": "TCP", 00:16:47.026 "adrfam": "IPv4", 00:16:47.026 "traddr": "10.0.0.1", 00:16:47.026 "trsvcid": "52466" 00:16:47.026 }, 00:16:47.026 "auth": { 00:16:47.026 "state": "completed", 00:16:47.026 "digest": "sha512", 00:16:47.026 "dhgroup": "ffdhe8192" 00:16:47.026 } 00:16:47.026 } 00:16:47.026 ]' 00:16:47.026 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.026 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.026 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.026 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.026 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.026 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.026 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.026 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.285 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:47.285 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:47.854 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.854 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.854 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.854 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.854 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.854 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.854 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:47.854 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.114 12:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.114 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.684 00:16:48.684 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.684 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.684 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.684 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.684 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.684 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.684 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.944 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.944 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.944 { 00:16:48.944 "cntlid": 139, 00:16:48.944 "qid": 0, 00:16:48.944 "state": "enabled", 00:16:48.944 "thread": "nvmf_tgt_poll_group_000", 00:16:48.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.944 "listen_address": { 00:16:48.944 "trtype": "TCP", 00:16:48.944 "adrfam": "IPv4", 00:16:48.944 "traddr": "10.0.0.2", 00:16:48.944 "trsvcid": "4420" 00:16:48.944 }, 00:16:48.944 "peer_address": { 00:16:48.944 "trtype": "TCP", 00:16:48.944 "adrfam": "IPv4", 00:16:48.944 "traddr": "10.0.0.1", 00:16:48.944 "trsvcid": "52502" 00:16:48.944 }, 00:16:48.944 "auth": { 00:16:48.944 "state": "completed", 00:16:48.944 "digest": "sha512", 00:16:48.944 "dhgroup": "ffdhe8192" 00:16:48.944 } 00:16:48.944 } 00:16:48.944 ]' 00:16:48.944 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.944 12:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.944 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.944 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.944 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.944 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.944 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.944 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.203 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:49.203 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: --dhchap-ctrl-secret DHHC-1:02:M2NjZTQ1NGRmMThlYzQxMDJjYjI0NjgwOTJjMmVhYjYzNWFiN2E4NTA1ZDFmZDI5icju7A==: 00:16:49.772 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.772 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.772 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.773 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.773 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.773 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.773 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.773 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.032 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.602 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.602 12:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.602 { 00:16:50.602 "cntlid": 141, 00:16:50.602 "qid": 0, 00:16:50.602 "state": "enabled", 00:16:50.602 "thread": "nvmf_tgt_poll_group_000", 00:16:50.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.602 "listen_address": { 00:16:50.602 "trtype": "TCP", 00:16:50.602 "adrfam": "IPv4", 00:16:50.602 "traddr": "10.0.0.2", 00:16:50.602 "trsvcid": "4420" 00:16:50.602 }, 00:16:50.602 "peer_address": { 00:16:50.602 "trtype": "TCP", 00:16:50.602 "adrfam": "IPv4", 00:16:50.602 "traddr": "10.0.0.1", 00:16:50.602 "trsvcid": "52528" 00:16:50.602 }, 00:16:50.602 "auth": { 00:16:50.602 "state": "completed", 00:16:50.602 "digest": "sha512", 00:16:50.602 "dhgroup": "ffdhe8192" 00:16:50.602 } 00:16:50.602 } 00:16:50.602 ]' 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.602 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.862 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.862 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.862 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.862 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.862 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.121 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:51.121 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:01:MjU4Mjk0OTU1ZDQ0YjU3NGVlNGI5YTZmN2Y2ZjBmOWSVmjhq: 00:16:51.690 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.690 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.690 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.690 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.690 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.690 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.691 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.950 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.950 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.950 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.950 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.210 00:16:52.210 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.210 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.210 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.469 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.469 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.469 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.469 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.469 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.469 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.469 { 00:16:52.469 "cntlid": 143, 00:16:52.469 "qid": 0, 00:16:52.469 "state": "enabled", 00:16:52.469 "thread": "nvmf_tgt_poll_group_000", 00:16:52.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.469 "listen_address": { 00:16:52.469 "trtype": "TCP", 00:16:52.469 "adrfam": 
"IPv4", 00:16:52.469 "traddr": "10.0.0.2", 00:16:52.469 "trsvcid": "4420" 00:16:52.469 }, 00:16:52.469 "peer_address": { 00:16:52.469 "trtype": "TCP", 00:16:52.469 "adrfam": "IPv4", 00:16:52.469 "traddr": "10.0.0.1", 00:16:52.469 "trsvcid": "52540" 00:16:52.469 }, 00:16:52.469 "auth": { 00:16:52.469 "state": "completed", 00:16:52.469 "digest": "sha512", 00:16:52.469 "dhgroup": "ffdhe8192" 00:16:52.469 } 00:16:52.469 } 00:16:52.469 ]' 00:16:52.469 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.469 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.469 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.728 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.728 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.728 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.728 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.728 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.987 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:52.987 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:53.555 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.555 12:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.555 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.124 00:16:54.124 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.124 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.124 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.384 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.384 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.384 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.384 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.384 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.384 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.384 { 00:16:54.384 "cntlid": 145, 00:16:54.384 "qid": 0, 00:16:54.384 "state": "enabled", 00:16:54.384 "thread": "nvmf_tgt_poll_group_000", 00:16:54.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.384 "listen_address": { 00:16:54.384 "trtype": "TCP", 00:16:54.384 "adrfam": "IPv4", 00:16:54.384 "traddr": "10.0.0.2", 00:16:54.384 "trsvcid": "4420" 00:16:54.384 }, 00:16:54.384 "peer_address": { 00:16:54.384 "trtype": "TCP", 00:16:54.384 "adrfam": "IPv4", 00:16:54.384 "traddr": "10.0.0.1", 00:16:54.384 "trsvcid": "52574" 00:16:54.384 }, 00:16:54.384 "auth": { 00:16:54.384 "state": 
"completed", 00:16:54.384 "digest": "sha512", 00:16:54.384 "dhgroup": "ffdhe8192" 00:16:54.384 } 00:16:54.384 } 00:16:54.384 ]' 00:16:54.384 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.384 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.384 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.384 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.384 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.384 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.384 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.384 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.644 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:54.644 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDlmZDgxNGY0Y2Q3OGIyYmUzMmVhOGJiNWE0ZDFjOGNkYmZjNzFhYWMzNWQ3NzFh4OUQGg==: --dhchap-ctrl-secret 
DHHC-1:03:ODMwNDA4ZjM3OGJlZDIzOGJlNDdmNjc5MzcyMmEwMTk3NzgwODNkM2U3NTNlMWQ3YmQ0OTY5MDdlZTJjMjY1NHsIsdM=: 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:55.213 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:55.783 request: 00:16:55.783 { 00:16:55.783 "name": "nvme0", 00:16:55.783 "trtype": "tcp", 00:16:55.783 "traddr": "10.0.0.2", 00:16:55.783 "adrfam": "ipv4", 00:16:55.783 "trsvcid": "4420", 00:16:55.783 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.783 "prchk_reftag": false, 00:16:55.783 "prchk_guard": false, 00:16:55.783 "hdgst": false, 00:16:55.783 "ddgst": false, 00:16:55.783 "dhchap_key": "key2", 00:16:55.783 "allow_unrecognized_csi": false, 00:16:55.783 "method": "bdev_nvme_attach_controller", 00:16:55.783 "req_id": 1 00:16:55.783 } 00:16:55.783 Got JSON-RPC error response 00:16:55.783 response: 00:16:55.783 { 00:16:55.783 "code": -5, 00:16:55.783 "message": 
"Input/output error" 00:16:55.783 } 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:55.783 12:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.783 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.352 request: 00:16:56.352 { 00:16:56.352 "name": "nvme0", 00:16:56.352 "trtype": "tcp", 00:16:56.352 "traddr": "10.0.0.2", 00:16:56.352 "adrfam": "ipv4", 00:16:56.352 "trsvcid": "4420", 00:16:56.352 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.352 "prchk_reftag": false, 00:16:56.352 "prchk_guard": false, 00:16:56.352 "hdgst": 
false, 00:16:56.352 "ddgst": false, 00:16:56.352 "dhchap_key": "key1", 00:16:56.352 "dhchap_ctrlr_key": "ckey2", 00:16:56.352 "allow_unrecognized_csi": false, 00:16:56.353 "method": "bdev_nvme_attach_controller", 00:16:56.353 "req_id": 1 00:16:56.353 } 00:16:56.353 Got JSON-RPC error response 00:16:56.353 response: 00:16:56.353 { 00:16:56.353 "code": -5, 00:16:56.353 "message": "Input/output error" 00:16:56.353 } 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.353 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.613 request: 00:16:56.613 { 00:16:56.613 "name": "nvme0", 00:16:56.613 "trtype": 
"tcp", 00:16:56.613 "traddr": "10.0.0.2", 00:16:56.613 "adrfam": "ipv4", 00:16:56.613 "trsvcid": "4420", 00:16:56.613 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.613 "prchk_reftag": false, 00:16:56.613 "prchk_guard": false, 00:16:56.613 "hdgst": false, 00:16:56.613 "ddgst": false, 00:16:56.613 "dhchap_key": "key1", 00:16:56.613 "dhchap_ctrlr_key": "ckey1", 00:16:56.613 "allow_unrecognized_csi": false, 00:16:56.613 "method": "bdev_nvme_attach_controller", 00:16:56.613 "req_id": 1 00:16:56.613 } 00:16:56.613 Got JSON-RPC error response 00:16:56.613 response: 00:16:56.613 { 00:16:56.613 "code": -5, 00:16:56.613 "message": "Input/output error" 00:16:56.613 } 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2303306 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' -z 2303306 ']' 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2303306 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2303306 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2303306' 00:16:56.873 killing process with pid 2303306 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2303306 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2303306 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2325111 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2325111 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2325111 ']' 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:56.873 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2325111 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2325111 ']' 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:57.133 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.392 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:57.392 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:57.392 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:57.392 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.392 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 null0 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IiK 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.pJM ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pJM 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.WqN 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.W9L ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W9L 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4A0 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.19E ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.19E 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lh1 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.653 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.591 nvme0n1 00:16:58.591 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.591 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.591 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.591 { 00:16:58.591 "cntlid": 1, 00:16:58.591 "qid": 0, 00:16:58.591 "state": "enabled", 00:16:58.591 "thread": "nvmf_tgt_poll_group_000", 00:16:58.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.591 "listen_address": { 00:16:58.591 "trtype": "TCP", 00:16:58.591 "adrfam": "IPv4", 00:16:58.591 "traddr": "10.0.0.2", 00:16:58.591 "trsvcid": "4420" 00:16:58.591 }, 00:16:58.591 "peer_address": { 00:16:58.591 "trtype": "TCP", 00:16:58.591 "adrfam": "IPv4", 00:16:58.591 "traddr": 
"10.0.0.1", 00:16:58.591 "trsvcid": "49104" 00:16:58.591 }, 00:16:58.591 "auth": { 00:16:58.591 "state": "completed", 00:16:58.591 "digest": "sha512", 00:16:58.591 "dhgroup": "ffdhe8192" 00:16:58.591 } 00:16:58.591 } 00:16:58.591 ]' 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.591 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.850 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.850 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.850 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.850 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:58.850 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:16:59.418 12:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.418 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.418 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.418 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.418 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.418 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:59.418 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.418 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:59.678 12:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.678 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.937 request: 00:16:59.937 { 00:16:59.937 "name": "nvme0", 00:16:59.937 "trtype": "tcp", 00:16:59.937 "traddr": "10.0.0.2", 00:16:59.937 "adrfam": "ipv4", 00:16:59.937 "trsvcid": "4420", 00:16:59.937 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:59.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.937 "prchk_reftag": false, 00:16:59.937 "prchk_guard": false, 00:16:59.937 "hdgst": false, 00:16:59.937 "ddgst": false, 00:16:59.937 "dhchap_key": "key3", 00:16:59.937 
"allow_unrecognized_csi": false, 00:16:59.937 "method": "bdev_nvme_attach_controller", 00:16:59.937 "req_id": 1 00:16:59.937 } 00:16:59.937 Got JSON-RPC error response 00:16:59.937 response: 00:16:59.937 { 00:16:59.937 "code": -5, 00:16:59.937 "message": "Input/output error" 00:16:59.937 } 00:16:59.937 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:59.937 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.937 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.937 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.937 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:59.937 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:59.937 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:59.937 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:00.196 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:00.196 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:00.196 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:00.196 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:00.196 12:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.196 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:00.196 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.196 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.196 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.196 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.454 request: 00:17:00.454 { 00:17:00.454 "name": "nvme0", 00:17:00.454 "trtype": "tcp", 00:17:00.454 "traddr": "10.0.0.2", 00:17:00.454 "adrfam": "ipv4", 00:17:00.454 "trsvcid": "4420", 00:17:00.454 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.454 "prchk_reftag": false, 00:17:00.454 "prchk_guard": false, 00:17:00.454 "hdgst": false, 00:17:00.454 "ddgst": false, 00:17:00.454 "dhchap_key": "key3", 00:17:00.454 "allow_unrecognized_csi": false, 00:17:00.454 "method": "bdev_nvme_attach_controller", 00:17:00.454 "req_id": 1 00:17:00.454 } 00:17:00.454 Got JSON-RPC error response 00:17:00.454 response: 00:17:00.454 { 00:17:00.454 "code": -5, 00:17:00.454 "message": "Input/output error" 00:17:00.454 } 00:17:00.454 
12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:00.454 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:00.454 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:00.454 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:00.454 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:00.454 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:00.454 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:00.454 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:00.454 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:00.454 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:00.454 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.712 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.971 request: 00:17:00.971 { 00:17:00.971 "name": "nvme0", 00:17:00.971 "trtype": "tcp", 00:17:00.971 "traddr": "10.0.0.2", 00:17:00.971 "adrfam": "ipv4", 00:17:00.971 "trsvcid": "4420", 00:17:00.971 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.971 "prchk_reftag": false, 00:17:00.971 "prchk_guard": false, 00:17:00.971 "hdgst": false, 00:17:00.971 "ddgst": false, 00:17:00.971 "dhchap_key": "key0", 00:17:00.971 "dhchap_ctrlr_key": "key1", 00:17:00.971 "allow_unrecognized_csi": false, 00:17:00.971 "method": "bdev_nvme_attach_controller", 00:17:00.971 "req_id": 1 00:17:00.971 } 00:17:00.971 Got JSON-RPC error response 00:17:00.971 response: 00:17:00.971 { 00:17:00.971 "code": -5, 00:17:00.971 "message": "Input/output error" 00:17:00.971 } 00:17:00.971 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:00.971 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:00.971 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:00.971 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:00.971 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:00.971 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:00.971 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:01.230 nvme0n1 00:17:01.230 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:01.230 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:01.230 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.490 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.490 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.490 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.748 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:01.748 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.748 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:01.748 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.748 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:01.748 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:01.748 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:02.316 nvme0n1 00:17:02.316 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:02.316 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:02.316 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.575 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.575 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:02.575 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.575 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.575 
13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.575 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:02.575 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:02.575 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.834 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.834 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:17:02.834 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: --dhchap-ctrl-secret DHHC-1:03:OTk0YjM1MjcxMDBiOWViNTI1MjY2YjBmMjgwMGE4YTIxZmYwMTJlYzM1NjkyOGU2ZmI5OWVhZDljYTMxNGMxMiw9jEE=: 00:17:03.400 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:03.400 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:03.400 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:03.400 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:03.400 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:03.400 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:03.400 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:03.400 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.400 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:03.660 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:03.919 request: 00:17:03.919 { 00:17:03.919 "name": "nvme0", 00:17:03.919 "trtype": "tcp", 00:17:03.919 "traddr": "10.0.0.2", 00:17:03.919 "adrfam": "ipv4", 00:17:03.919 "trsvcid": "4420", 00:17:03.919 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.919 "prchk_reftag": false, 00:17:03.919 "prchk_guard": false, 00:17:03.919 "hdgst": false, 00:17:03.919 "ddgst": false, 00:17:03.919 "dhchap_key": "key1", 00:17:03.919 "allow_unrecognized_csi": false, 00:17:03.919 "method": "bdev_nvme_attach_controller", 00:17:03.919 "req_id": 1 00:17:03.919 } 00:17:03.919 Got JSON-RPC error response 00:17:03.919 response: 00:17:03.919 { 00:17:03.919 "code": -5, 00:17:03.919 "message": "Input/output error" 00:17:03.919 } 00:17:04.179 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:04.179 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:04.179 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:04.179 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:04.179 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:04.179 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:04.179 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:04.748 nvme0n1 00:17:04.748 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:04.748 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:04.748 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.007 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.007 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.007 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.267 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.267 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.267 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.267 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.267 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:05.267 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:05.267 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:05.526 nvme0n1 00:17:05.526 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:05.526 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:05.526 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: '' 2s 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:05.785 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:06.043 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: ]] 00:17:06.043 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTFkM2RhNDdlNTEyZWM0OTk0MjViZTNkMTA3NGJmYzbOrd8v: 00:17:06.043 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:06.043 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:06.043 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:07.947 
13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: 2s 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:07.947 13:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: ]] 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTEzNDVkNmUyNmM3OGEwY2U1YzlkOTNlYTM5NDNlODlkN2FlZjkzZDRjOGUwOGIx32Cxcw==: 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:07.947 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:09.851 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:09.851 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:09.851 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:09.851 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:10.110 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:10.678 nvme0n1 00:17:10.938 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:10.938 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.938 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.938 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.938 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:10.938 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:11.197 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:11.197 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:11.197 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.457 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.457 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.457 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.457 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.457 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.457 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:11.457 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:11.716 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:11.716 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:11.716 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:11.975 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:12.234 request: 00:17:12.234 { 00:17:12.234 "name": "nvme0", 00:17:12.234 "dhchap_key": "key1", 00:17:12.234 "dhchap_ctrlr_key": "key3", 00:17:12.234 "method": "bdev_nvme_set_keys", 00:17:12.234 "req_id": 1 00:17:12.234 } 00:17:12.234 Got JSON-RPC error response 00:17:12.234 response: 00:17:12.234 { 00:17:12.234 "code": -13, 00:17:12.234 "message": "Permission denied" 00:17:12.234 } 00:17:12.493 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:12.493 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:12.493 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:12.493 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:12.493 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:12.493 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:12.493 13:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.493 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:12.493 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:13.871 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:13.871 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:13.871 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.871 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:13.871 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:13.871 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.871 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.871 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.872 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:13.872 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:13.872 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:14.440 nvme0n1 00:17:14.440 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:14.440 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.700 13:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:14.700 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:14.960 request: 00:17:14.960 { 00:17:14.960 "name": "nvme0", 00:17:14.960 "dhchap_key": "key2", 00:17:14.960 "dhchap_ctrlr_key": "key0", 00:17:14.960 "method": "bdev_nvme_set_keys", 00:17:14.960 "req_id": 1 00:17:14.960 } 00:17:14.960 Got JSON-RPC error response 00:17:14.960 response: 00:17:14.960 { 00:17:14.960 "code": -13, 00:17:14.960 "message": "Permission denied" 00:17:14.960 } 00:17:14.960 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:14.960 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.960 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.960 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.960 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:14.960 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:14.960 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.226 13:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:15.226 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:16.165 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:16.165 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:16.165 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2303416 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2303416 ']' 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2303416 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2303416 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@970 -- # echo 'killing process with pid 2303416' 00:17:16.424 killing process with pid 2303416 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2303416 00:17:16.424 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2303416 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:16.992 rmmod nvme_tcp 00:17:16.992 rmmod nvme_fabrics 00:17:16.992 rmmod nvme_keyring 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2325111 ']' 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2325111 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2325111 ']' 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2325111 
00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2325111 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2325111' 00:17:16.992 killing process with pid 2325111 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2325111 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2325111 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:16.992 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:17.251 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:17.251 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:17.251 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:17.251 13:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:17.251 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.251 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.251 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.IiK /tmp/spdk.key-sha256.WqN /tmp/spdk.key-sha384.4A0 /tmp/spdk.key-sha512.lh1 /tmp/spdk.key-sha512.pJM /tmp/spdk.key-sha384.W9L /tmp/spdk.key-sha256.19E '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:19.157 00:17:19.157 real 2m34.028s 00:17:19.157 user 5m55.253s 00:17:19.157 sys 0m24.413s 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.157 ************************************ 00:17:19.157 END TEST nvmf_auth_target 00:17:19.157 ************************************ 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- 
# xtrace_disable 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.157 ************************************ 00:17:19.157 START TEST nvmf_bdevio_no_huge 00:17:19.157 ************************************ 00:17:19.157 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:19.416 * Looking for test storage... 00:17:19.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:19.416 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:19.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.416 --rc genhtml_branch_coverage=1 00:17:19.416 --rc genhtml_function_coverage=1 00:17:19.416 --rc genhtml_legend=1 00:17:19.416 --rc geninfo_all_blocks=1 00:17:19.416 --rc geninfo_unexecuted_blocks=1 00:17:19.416 00:17:19.416 ' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:19.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.416 --rc genhtml_branch_coverage=1 00:17:19.416 --rc genhtml_function_coverage=1 00:17:19.416 --rc genhtml_legend=1 00:17:19.416 --rc geninfo_all_blocks=1 00:17:19.416 --rc geninfo_unexecuted_blocks=1 00:17:19.416 00:17:19.416 ' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:19.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.416 --rc genhtml_branch_coverage=1 00:17:19.416 --rc genhtml_function_coverage=1 00:17:19.416 --rc genhtml_legend=1 00:17:19.416 --rc geninfo_all_blocks=1 00:17:19.416 --rc geninfo_unexecuted_blocks=1 00:17:19.416 00:17:19.416 ' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:19.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.416 --rc genhtml_branch_coverage=1 
00:17:19.416 --rc genhtml_function_coverage=1 00:17:19.416 --rc genhtml_legend=1 00:17:19.416 --rc geninfo_all_blocks=1 00:17:19.416 --rc geninfo_unexecuted_blocks=1 00:17:19.416 00:17:19.416 ' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.416 13:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:19.416 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:25.988 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.988 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:25.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:25.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:25.989 Found net devices under 0000:86:00.0: cvl_0_0 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.989 
13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:25.989 Found net devices under 0000:86:00.1: cvl_0_1 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:25.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:17:25.989 00:17:25.989 --- 10.0.0.2 ping statistics --- 00:17:25.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.989 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:17:25.989 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:25.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:17:25.989 00:17:25.990 --- 10.0.0.1 ping statistics --- 00:17:25.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.990 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:17:25.990 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.990 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:25.990 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:25.990 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.990 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:25.990 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:25.990 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.990 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:25.990 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2332509 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2332509 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 2332509 ']' 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:25.990 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:25.990 [2024-11-18 13:00:23.064868] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:17:25.990 [2024-11-18 13:00:23.064919] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:25.990 [2024-11-18 13:00:23.150735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:25.990 [2024-11-18 13:00:23.198065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.990 [2024-11-18 13:00:23.198103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.990 [2024-11-18 13:00:23.198110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.990 [2024-11-18 13:00:23.198116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.990 [2024-11-18 13:00:23.198121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:25.990 [2024-11-18 13:00:23.199331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:25.990 [2024-11-18 13:00:23.199448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:25.990 [2024-11-18 13:00:23.199571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.990 [2024-11-18 13:00:23.199571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:26.250 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:26.250 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:17:26.250 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.250 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:26.250 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.509 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.509 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.509 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.510 [2024-11-18 13:00:23.955032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:26.510 13:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.510 Malloc0 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.510 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.510 [2024-11-18 13:00:23.999319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.510 13:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:26.510 { 00:17:26.510 "params": { 00:17:26.510 "name": "Nvme$subsystem", 00:17:26.510 "trtype": "$TEST_TRANSPORT", 00:17:26.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.510 "adrfam": "ipv4", 00:17:26.510 "trsvcid": "$NVMF_PORT", 00:17:26.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.510 "hdgst": ${hdgst:-false}, 00:17:26.510 "ddgst": ${ddgst:-false} 00:17:26.510 }, 00:17:26.510 "method": "bdev_nvme_attach_controller" 00:17:26.510 } 00:17:26.510 EOF 00:17:26.510 )") 00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:26.510 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:26.510 "params": { 00:17:26.510 "name": "Nvme1", 00:17:26.510 "trtype": "tcp", 00:17:26.510 "traddr": "10.0.0.2", 00:17:26.510 "adrfam": "ipv4", 00:17:26.510 "trsvcid": "4420", 00:17:26.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.510 "hdgst": false, 00:17:26.510 "ddgst": false 00:17:26.510 }, 00:17:26.510 "method": "bdev_nvme_attach_controller" 00:17:26.510 }' 00:17:26.510 [2024-11-18 13:00:24.051130] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:17:26.510 [2024-11-18 13:00:24.051173] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2332678 ] 00:17:26.510 [2024-11-18 13:00:24.130788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.510 [2024-11-18 13:00:24.180258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.510 [2024-11-18 13:00:24.180381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.510 [2024-11-18 13:00:24.180380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.079 I/O targets: 00:17:27.079 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:27.079 00:17:27.079 00:17:27.079 CUnit - A unit testing framework for C - Version 2.1-3 00:17:27.079 http://cunit.sourceforge.net/ 00:17:27.079 00:17:27.079 00:17:27.079 Suite: bdevio tests on: Nvme1n1 00:17:27.079 Test: blockdev write read block ...passed 00:17:27.079 Test: blockdev write zeroes read block ...passed 00:17:27.079 Test: blockdev write zeroes read no split ...passed 00:17:27.079 Test: blockdev write zeroes 
read split ...passed 00:17:27.079 Test: blockdev write zeroes read split partial ...passed 00:17:27.079 Test: blockdev reset ...[2024-11-18 13:00:24.592122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:27.079 [2024-11-18 13:00:24.592181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c1920 (9): Bad file descriptor 00:17:27.079 [2024-11-18 13:00:24.647601] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:27.079 passed 00:17:27.079 Test: blockdev write read 8 blocks ...passed 00:17:27.079 Test: blockdev write read size > 128k ...passed 00:17:27.079 Test: blockdev write read invalid size ...passed 00:17:27.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:27.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:27.079 Test: blockdev write read max offset ...passed 00:17:27.338 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:27.338 Test: blockdev writev readv 8 blocks ...passed 00:17:27.338 Test: blockdev writev readv 30 x 1block ...passed 00:17:27.338 Test: blockdev writev readv block ...passed 00:17:27.338 Test: blockdev writev readv size > 128k ...passed 00:17:27.338 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:27.338 Test: blockdev comparev and writev ...[2024-11-18 13:00:24.902114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.338 [2024-11-18 13:00:24.902142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:27.338 [2024-11-18 13:00:24.902157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.338 [2024-11-18 
13:00:24.902165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:27.338 [2024-11-18 13:00:24.902397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.338 [2024-11-18 13:00:24.902408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:27.338 [2024-11-18 13:00:24.902421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.338 [2024-11-18 13:00:24.902428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:27.338 [2024-11-18 13:00:24.902660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.338 [2024-11-18 13:00:24.902670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:27.338 [2024-11-18 13:00:24.902685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.338 [2024-11-18 13:00:24.902693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:27.338 [2024-11-18 13:00:24.902917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.338 [2024-11-18 13:00:24.902927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:27.338 [2024-11-18 13:00:24.902939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.338 [2024-11-18 13:00:24.902946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:27.338 passed 00:17:27.339 Test: blockdev nvme passthru rw ...passed 00:17:27.339 Test: blockdev nvme passthru vendor specific ...[2024-11-18 13:00:24.985743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.339 [2024-11-18 13:00:24.985759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:27.339 [2024-11-18 13:00:24.985866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.339 [2024-11-18 13:00:24.985875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:27.339 [2024-11-18 13:00:24.985980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.339 [2024-11-18 13:00:24.985989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:27.339 [2024-11-18 13:00:24.986092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.339 [2024-11-18 13:00:24.986104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:27.339 passed 00:17:27.339 Test: blockdev nvme admin passthru ...passed 00:17:27.598 Test: blockdev copy ...passed 00:17:27.598 00:17:27.598 Run Summary: Type Total Ran Passed Failed Inactive 00:17:27.598 suites 1 1 n/a 0 0 00:17:27.598 tests 23 23 23 0 0 00:17:27.598 asserts 152 152 152 0 n/a 00:17:27.598 00:17:27.598 Elapsed time = 1.163 seconds 
00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.857 rmmod nvme_tcp 00:17:27.857 rmmod nvme_fabrics 00:17:27.857 rmmod nvme_keyring 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2332509 ']' 00:17:27.857 13:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2332509 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 2332509 ']' 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 2332509 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2332509 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2332509' 00:17:27.857 killing process with pid 2332509 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 2332509 00:17:27.857 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 2332509 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:28.117 13:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.117 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:30.654 00:17:30.654 real 0m10.988s 00:17:30.654 user 0m14.219s 00:17:30.654 sys 0m5.405s 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.654 ************************************ 00:17:30.654 END TEST nvmf_bdevio_no_huge 00:17:30.654 ************************************ 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:30.654 
************************************ 00:17:30.654 START TEST nvmf_tls 00:17:30.654 ************************************ 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:30.654 * Looking for test storage... 00:17:30.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:17:30.654 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:30.654 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:30.654 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.654 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.654 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.654 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:30.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.655 --rc genhtml_branch_coverage=1 00:17:30.655 --rc genhtml_function_coverage=1 00:17:30.655 --rc genhtml_legend=1 00:17:30.655 --rc geninfo_all_blocks=1 00:17:30.655 --rc geninfo_unexecuted_blocks=1 00:17:30.655 00:17:30.655 ' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:30.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.655 --rc genhtml_branch_coverage=1 00:17:30.655 --rc genhtml_function_coverage=1 00:17:30.655 --rc genhtml_legend=1 00:17:30.655 --rc geninfo_all_blocks=1 00:17:30.655 --rc geninfo_unexecuted_blocks=1 00:17:30.655 00:17:30.655 ' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:30.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.655 --rc genhtml_branch_coverage=1 00:17:30.655 --rc genhtml_function_coverage=1 00:17:30.655 --rc genhtml_legend=1 00:17:30.655 --rc geninfo_all_blocks=1 00:17:30.655 --rc geninfo_unexecuted_blocks=1 00:17:30.655 00:17:30.655 ' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:30.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.655 --rc genhtml_branch_coverage=1 00:17:30.655 --rc genhtml_function_coverage=1 00:17:30.655 --rc genhtml_legend=1 00:17:30.655 --rc geninfo_all_blocks=1 00:17:30.655 --rc geninfo_unexecuted_blocks=1 00:17:30.655 00:17:30.655 ' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.655 
13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:30.655 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.246 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:37.246 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:37.246 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.246 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:37.246 Found net devices under 0000:86:00.0: cvl_0_0 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:37.246 Found net devices under 0000:86:00.1: cvl_0_1 00:17:37.246 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.246 
13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.246 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.247 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:37.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:17:37.247 00:17:37.247 --- 10.0.0.2 ping statistics --- 00:17:37.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.247 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:37.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:17:37.247 00:17:37.247 --- 10.0.0.1 ping statistics --- 00:17:37.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.247 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2336439 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2336439 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2336439 ']' 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.247 [2024-11-18 13:00:34.126035] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:17:37.247 [2024-11-18 13:00:34.126085] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.247 [2024-11-18 13:00:34.208698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.247 [2024-11-18 13:00:34.249613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.247 [2024-11-18 13:00:34.249651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:37.247 [2024-11-18 13:00:34.249658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.247 [2024-11-18 13:00:34.249664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.247 [2024-11-18 13:00:34.249669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.247 [2024-11-18 13:00:34.250220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:37.247 true 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:37.247 
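The `nvmf_tcp_init` trace above (nvmf/common.sh@250-291) splits one NIC port into a network namespace so the SPDK target and the initiator exchange real TCP between 10.0.0.2 and 10.0.0.1. A minimal stand-alone sketch of that setup, using the `cvl_0_0`/`cvl_0_1` device names from the log (requires root and real interfaces, so it is illustrative only):

```shell
# Reconstruction of the namespace plumbing traced above: the target-side
# port (cvl_0_0) is moved into its own netns, both ends get /24 addresses,
# and the NVMe/TCP port 4420 is opened on the initiator-facing interface.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, host netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                     # reachability check, as in the log
```

Commands inside the namespace are then run with `ip netns exec cvl_0_0_ns_spdk ...`, which is exactly how the log launches `nvmf_tgt` a few lines later.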
13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:37.247 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.507 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:37.507 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:37.507 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:37.767 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:37.767 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.025 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:38.025 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:38.025 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.025 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:38.025 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:38.025 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:38.025 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:38.285 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.285 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:38.544 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:38.544 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:38.544 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:38.544 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.544 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:38.809 13:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.OgqzLQiEoV 00:17:38.809 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:39.111 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gtTo1Oxmtq 00:17:39.111 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:39.111 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:39.111 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.OgqzLQiEoV 00:17:39.111 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gtTo1Oxmtq 00:17:39.111 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:39.111 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:39.394 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.OgqzLQiEoV 00:17:39.394 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OgqzLQiEoV 00:17:39.394 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:39.721 [2024-11-18 13:00:37.133020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.721 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:39.721 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:40.004 [2024-11-18 13:00:37.497973] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.004 [2024-11-18 13:00:37.498169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.004 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:40.004 malloc0 00:17:40.292 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:40.292 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OgqzLQiEoV 00:17:40.571 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:40.863 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.OgqzLQiEoV 00:17:51.014 Initializing NVMe Controllers 00:17:51.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:51.014 Initialization complete. Launching workers. 
00:17:51.014 ======================================================== 00:17:51.014 Latency(us) 00:17:51.014 Device Information : IOPS MiB/s Average min max 00:17:51.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16406.99 64.09 3900.88 822.27 6002.54 00:17:51.014 ======================================================== 00:17:51.014 Total : 16406.99 64.09 3900.88 822.27 6002.54 00:17:51.014 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OgqzLQiEoV 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OgqzLQiEoV 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2338887 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2338887 /var/tmp/bdevperf.sock 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2338887 ']' 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.014 [2024-11-18 13:00:48.431200] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:17:51.014 [2024-11-18 13:00:48.431249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338887 ] 00:17:51.014 [2024-11-18 13:00:48.504526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.014 [2024-11-18 13:00:48.546156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:51.014 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OgqzLQiEoV 00:17:51.274 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:17:51.533 [2024-11-18 13:00:48.988732] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:51.533 TLSTESTn1 00:17:51.533 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:51.533 Running I/O for 10 seconds... 00:17:53.847 5154.00 IOPS, 20.13 MiB/s [2024-11-18T12:00:52.487Z] 5365.00 IOPS, 20.96 MiB/s [2024-11-18T12:00:53.425Z] 5415.67 IOPS, 21.15 MiB/s [2024-11-18T12:00:54.363Z] 5393.00 IOPS, 21.07 MiB/s [2024-11-18T12:00:55.300Z] 5430.40 IOPS, 21.21 MiB/s [2024-11-18T12:00:56.237Z] 5427.00 IOPS, 21.20 MiB/s [2024-11-18T12:00:57.615Z] 5451.14 IOPS, 21.29 MiB/s [2024-11-18T12:00:58.552Z] 5462.00 IOPS, 21.34 MiB/s [2024-11-18T12:00:59.489Z] 5475.67 IOPS, 21.39 MiB/s [2024-11-18T12:00:59.489Z] 5470.70 IOPS, 21.37 MiB/s 00:18:01.787 Latency(us) 00:18:01.787 [2024-11-18T12:00:59.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.787 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:01.787 Verification LBA range: start 0x0 length 0x2000 00:18:01.787 TLSTESTn1 : 10.02 5472.69 21.38 0.00 0.00 23351.05 4644.51 29633.67 00:18:01.787 [2024-11-18T12:00:59.489Z] =================================================================================================================== 00:18:01.787 [2024-11-18T12:00:59.489Z] Total : 5472.69 21.38 0.00 0.00 23351.05 4644.51 29633.67 00:18:01.787 { 00:18:01.787 "results": [ 00:18:01.787 { 00:18:01.787 "job": "TLSTESTn1", 00:18:01.787 "core_mask": "0x4", 00:18:01.787 "workload": "verify", 00:18:01.787 "status": "finished", 00:18:01.787 "verify_range": { 00:18:01.787 "start": 0, 00:18:01.787 "length": 8192 00:18:01.787 }, 00:18:01.787 "queue_depth": 128, 00:18:01.787 "io_size": 4096, 00:18:01.787 "runtime": 10.019567, 00:18:01.787 "iops": 
5472.69158437685, 00:18:01.787 "mibps": 21.37770150147207, 00:18:01.787 "io_failed": 0, 00:18:01.787 "io_timeout": 0, 00:18:01.787 "avg_latency_us": 23351.051438158804, 00:18:01.787 "min_latency_us": 4644.507826086957, 00:18:01.787 "max_latency_us": 29633.66956521739 00:18:01.787 } 00:18:01.787 ], 00:18:01.787 "core_count": 1 00:18:01.787 } 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2338887 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2338887 ']' 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2338887 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2338887 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2338887' 00:18:01.787 killing process with pid 2338887 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2338887 00:18:01.787 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.787 00:18:01.787 Latency(us) 00:18:01.787 [2024-11-18T12:00:59.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.787 [2024-11-18T12:00:59.489Z] 
=================================================================================================================== 00:18:01.787 [2024-11-18T12:00:59.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2338887 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gtTo1Oxmtq 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gtTo1Oxmtq 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gtTo1Oxmtq 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gtTo1Oxmtq 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2340571 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2340571 /var/tmp/bdevperf.sock 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2340571 ']' 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:01.787 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.046 [2024-11-18 13:00:59.499184] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:02.047 [2024-11-18 13:00:59.499232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340571 ] 00:18:02.047 [2024-11-18 13:00:59.573693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.047 [2024-11-18 13:00:59.615444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.047 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:02.047 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:02.047 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gtTo1Oxmtq 00:18:02.306 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:02.565 [2024-11-18 13:01:00.075346] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.565 [2024-11-18 13:01:00.085712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:02.565 [2024-11-18 13:01:00.085778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d170 (107): Transport endpoint is not connected 00:18:02.565 [2024-11-18 13:01:00.086757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d170 (9): Bad file descriptor 00:18:02.565 [2024-11-18 
13:01:00.087759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:02.565 [2024-11-18 13:01:00.087769] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:02.565 [2024-11-18 13:01:00.087777] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:02.565 [2024-11-18 13:01:00.087793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:02.565 request: 00:18:02.565 { 00:18:02.565 "name": "TLSTEST", 00:18:02.565 "trtype": "tcp", 00:18:02.565 "traddr": "10.0.0.2", 00:18:02.565 "adrfam": "ipv4", 00:18:02.565 "trsvcid": "4420", 00:18:02.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.565 "prchk_reftag": false, 00:18:02.565 "prchk_guard": false, 00:18:02.565 "hdgst": false, 00:18:02.565 "ddgst": false, 00:18:02.565 "psk": "key0", 00:18:02.565 "allow_unrecognized_csi": false, 00:18:02.565 "method": "bdev_nvme_attach_controller", 00:18:02.565 "req_id": 1 00:18:02.565 } 00:18:02.565 Got JSON-RPC error response 00:18:02.565 response: 00:18:02.565 { 00:18:02.565 "code": -5, 00:18:02.565 "message": "Input/output error" 00:18:02.565 } 00:18:02.565 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2340571 00:18:02.565 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2340571 ']' 00:18:02.565 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2340571 00:18:02.565 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:02.565 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:02.565 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2340571 00:18:02.565 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:02.565 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:02.565 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2340571' 00:18:02.565 killing process with pid 2340571 00:18:02.566 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2340571 00:18:02.566 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.566 00:18:02.566 Latency(us) 00:18:02.566 [2024-11-18T12:01:00.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.566 [2024-11-18T12:01:00.268Z] =================================================================================================================== 00:18:02.566 [2024-11-18T12:01:00.268Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.566 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2340571 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OgqzLQiEoV 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OgqzLQiEoV 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OgqzLQiEoV 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OgqzLQiEoV 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2340742 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2340742 
/var/tmp/bdevperf.sock 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2340742 ']' 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:02.825 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.825 [2024-11-18 13:01:00.363647] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:02.825 [2024-11-18 13:01:00.363698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340742 ] 00:18:02.825 [2024-11-18 13:01:00.430490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.825 [2024-11-18 13:01:00.467495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.085 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:03.085 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:03.085 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OgqzLQiEoV 00:18:03.085 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:03.344 [2024-11-18 13:01:00.934714] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.344 [2024-11-18 13:01:00.939410] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:03.344 [2024-11-18 13:01:00.939434] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:03.344 [2024-11-18 13:01:00.939475] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:03.344 [2024-11-18 13:01:00.940175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118d170 (107): Transport endpoint is not connected 00:18:03.344 [2024-11-18 13:01:00.941166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118d170 (9): Bad file descriptor 00:18:03.344 [2024-11-18 13:01:00.942167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:03.344 [2024-11-18 13:01:00.942181] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:03.344 [2024-11-18 13:01:00.942192] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:03.344 [2024-11-18 13:01:00.942203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:03.344 request: 00:18:03.344 { 00:18:03.344 "name": "TLSTEST", 00:18:03.344 "trtype": "tcp", 00:18:03.344 "traddr": "10.0.0.2", 00:18:03.344 "adrfam": "ipv4", 00:18:03.344 "trsvcid": "4420", 00:18:03.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.344 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:03.344 "prchk_reftag": false, 00:18:03.344 "prchk_guard": false, 00:18:03.344 "hdgst": false, 00:18:03.344 "ddgst": false, 00:18:03.344 "psk": "key0", 00:18:03.344 "allow_unrecognized_csi": false, 00:18:03.344 "method": "bdev_nvme_attach_controller", 00:18:03.344 "req_id": 1 00:18:03.344 } 00:18:03.345 Got JSON-RPC error response 00:18:03.345 response: 00:18:03.345 { 00:18:03.345 "code": -5, 00:18:03.345 "message": "Input/output error" 00:18:03.345 } 00:18:03.345 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2340742 00:18:03.345 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2340742 ']' 00:18:03.345 13:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2340742 00:18:03.345 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:03.345 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.345 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2340742 00:18:03.345 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:03.345 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:03.345 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2340742' 00:18:03.345 killing process with pid 2340742 00:18:03.345 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2340742 00:18:03.345 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.345 00:18:03.345 Latency(us) 00:18:03.345 [2024-11-18T12:01:01.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.345 [2024-11-18T12:01:01.047Z] =================================================================================================================== 00:18:03.345 [2024-11-18T12:01:01.047Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.345 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2340742 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.604 13:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OgqzLQiEoV 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OgqzLQiEoV 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OgqzLQiEoV 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OgqzLQiEoV 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2340976 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2340976 /var/tmp/bdevperf.sock 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2340976 ']' 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:03.604 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.604 [2024-11-18 13:01:01.227079] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:03.604 [2024-11-18 13:01:01.227127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340976 ] 00:18:03.604 [2024-11-18 13:01:01.302497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.863 [2024-11-18 13:01:01.344605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.863 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:03.863 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:03.863 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OgqzLQiEoV 00:18:04.123 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.123 [2024-11-18 13:01:01.799302] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.123 [2024-11-18 13:01:01.809986] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:04.123 [2024-11-18 13:01:01.810009] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:04.123 [2024-11-18 13:01:01.810033] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:04.123 [2024-11-18 13:01:01.810675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2164170 (107): Transport endpoint is not connected 00:18:04.123 [2024-11-18 13:01:01.811670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2164170 (9): Bad file descriptor 00:18:04.123 [2024-11-18 13:01:01.812671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:04.123 [2024-11-18 13:01:01.812680] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:04.123 [2024-11-18 13:01:01.812689] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:04.123 [2024-11-18 13:01:01.812701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:04.123 request: 00:18:04.123 { 00:18:04.123 "name": "TLSTEST", 00:18:04.123 "trtype": "tcp", 00:18:04.123 "traddr": "10.0.0.2", 00:18:04.123 "adrfam": "ipv4", 00:18:04.123 "trsvcid": "4420", 00:18:04.123 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:04.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.123 "prchk_reftag": false, 00:18:04.123 "prchk_guard": false, 00:18:04.123 "hdgst": false, 00:18:04.123 "ddgst": false, 00:18:04.123 "psk": "key0", 00:18:04.123 "allow_unrecognized_csi": false, 00:18:04.123 "method": "bdev_nvme_attach_controller", 00:18:04.123 "req_id": 1 00:18:04.123 } 00:18:04.123 Got JSON-RPC error response 00:18:04.123 response: 00:18:04.123 { 00:18:04.123 "code": -5, 00:18:04.123 "message": "Input/output error" 00:18:04.123 } 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2340976 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2340976 ']' 00:18:04.383 13:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2340976 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2340976 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2340976' 00:18:04.383 killing process with pid 2340976 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2340976 00:18:04.383 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.383 00:18:04.383 Latency(us) 00:18:04.383 [2024-11-18T12:01:02.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.383 [2024-11-18T12:01:02.085Z] =================================================================================================================== 00:18:04.383 [2024-11-18T12:01:02.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.383 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2340976 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.383 13:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2340995 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.383 13:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2340995 /var/tmp/bdevperf.sock 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2340995 ']' 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:04.383 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.643 [2024-11-18 13:01:02.101791] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:04.643 [2024-11-18 13:01:02.101842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340995 ] 00:18:04.643 [2024-11-18 13:01:02.168374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.643 [2024-11-18 13:01:02.205666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.643 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:04.643 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:04.643 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:04.902 [2024-11-18 13:01:02.484245] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:04.902 [2024-11-18 13:01:02.484277] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:04.902 request: 00:18:04.902 { 00:18:04.902 "name": "key0", 00:18:04.902 "path": "", 00:18:04.902 "method": "keyring_file_add_key", 00:18:04.902 "req_id": 1 00:18:04.902 } 00:18:04.902 Got JSON-RPC error response 00:18:04.902 response: 00:18:04.902 { 00:18:04.902 "code": -1, 00:18:04.902 "message": "Operation not permitted" 00:18:04.902 } 00:18:04.902 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:05.162 [2024-11-18 13:01:02.664803] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:05.162 [2024-11-18 13:01:02.664834] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:05.162 request: 00:18:05.162 { 00:18:05.162 "name": "TLSTEST", 00:18:05.162 "trtype": "tcp", 00:18:05.162 "traddr": "10.0.0.2", 00:18:05.162 "adrfam": "ipv4", 00:18:05.162 "trsvcid": "4420", 00:18:05.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.162 "prchk_reftag": false, 00:18:05.162 "prchk_guard": false, 00:18:05.162 "hdgst": false, 00:18:05.162 "ddgst": false, 00:18:05.162 "psk": "key0", 00:18:05.162 "allow_unrecognized_csi": false, 00:18:05.162 "method": "bdev_nvme_attach_controller", 00:18:05.162 "req_id": 1 00:18:05.162 } 00:18:05.162 Got JSON-RPC error response 00:18:05.162 response: 00:18:05.162 { 00:18:05.162 "code": -126, 00:18:05.162 "message": "Required key not available" 00:18:05.162 } 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2340995 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2340995 ']' 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2340995 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2340995 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2340995' 00:18:05.162 killing process with pid 2340995 
00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2340995 00:18:05.162 Received shutdown signal, test time was about 10.000000 seconds 00:18:05.162 00:18:05.162 Latency(us) 00:18:05.162 [2024-11-18T12:01:02.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.162 [2024-11-18T12:01:02.864Z] =================================================================================================================== 00:18:05.162 [2024-11-18T12:01:02.864Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:05.162 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2340995 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2336439 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2336439 ']' 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2336439 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2336439 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# process_name=reactor_1 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2336439' 00:18:05.421 killing process with pid 2336439 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2336439 00:18:05.421 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2336439 00:18:05.421 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:05.421 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:05.421 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:05.421 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:05.422 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:05.422 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:05.422 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Brr5j5DZrC 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:05.681 13:01:03 
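The `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` step above runs an inline `python -` heredoc and logs the resulting `key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:`. A standalone sketch of that derivation follows; the little-endian CRC-32 placement and the digest-argument-to-`02`-field mapping are assumptions inferred from the logged output, not taken from `nvmf/common.sh` itself:

```python
import base64
import zlib

# Configured PSK bytes: the log passes the hex string itself as key material.
key = b"00112233445566778899aabbccddeeff0011223344556677"
hash_id = "02"  # the trailing "2" argument appears in the output as field "02"

# CRC-32 of the key material, appended little-endian (assumption: this byte
# order reproduces the "wWXNJw==" tail seen in the log's base64 payload).
crc = zlib.crc32(key).to_bytes(4, "little")

# Interchange string: prefix, hash id, base64(key || crc), trailing colon.
interchange = f"NVMeTLSkey-1:{hash_id}:{base64.b64encode(key + crc).decode()}:"
print(interchange)
```

If the CRC byte-order assumption holds, the printed string should match the `key_long` value logged above; the first 64 base64 characters (`MDAx…Njc3`) encode the 48 ASCII key bytes and are fixed regardless of that assumption.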
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Brr5j5DZrC 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2341242 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2341242 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2341242 ']' 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:05.681 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.681 [2024-11-18 13:01:03.197843] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:05.681 [2024-11-18 13:01:03.197890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.681 [2024-11-18 13:01:03.275808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.681 [2024-11-18 13:01:03.316480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.681 [2024-11-18 13:01:03.316517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.681 [2024-11-18 13:01:03.316524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.681 [2024-11-18 13:01:03.316531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.681 [2024-11-18 13:01:03.316536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:05.681 [2024-11-18 13:01:03.317128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.940 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:05.940 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:05.940 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:05.940 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:05.941 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.941 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.941 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Brr5j5DZrC 00:18:05.941 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Brr5j5DZrC 00:18:05.941 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.941 [2024-11-18 13:01:03.633468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.200 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:06.200 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:06.459 [2024-11-18 13:01:04.046537] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:06.459 [2024-11-18 13:01:04.046734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:06.459 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.719 malloc0 00:18:06.719 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.978 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Brr5j5DZrC 00:18:06.978 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Brr5j5DZrC 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Brr5j5DZrC 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2341496 00:18:07.237 13:01:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2341496 /var/tmp/bdevperf.sock 00:18:07.237 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2341496 ']' 00:18:07.238 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.238 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:07.238 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.238 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:07.238 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.238 [2024-11-18 13:01:04.883613] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:07.238 [2024-11-18 13:01:04.883661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341496 ] 00:18:07.497 [2024-11-18 13:01:04.957300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.497 [2024-11-18 13:01:04.997968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.497 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:07.497 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:07.497 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Brr5j5DZrC 00:18:07.756 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:08.015 [2024-11-18 13:01:05.481095] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.015 TLSTESTn1 00:18:08.015 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:08.015 Running I/O for 10 seconds... 
00:18:10.330 5283.00 IOPS, 20.64 MiB/s [2024-11-18T12:01:08.968Z] 5341.50 IOPS, 20.87 MiB/s [2024-11-18T12:01:09.913Z] 5350.67 IOPS, 20.90 MiB/s [2024-11-18T12:01:10.850Z] 5326.25 IOPS, 20.81 MiB/s [2024-11-18T12:01:11.788Z] 5345.60 IOPS, 20.88 MiB/s [2024-11-18T12:01:12.725Z] 5361.00 IOPS, 20.94 MiB/s [2024-11-18T12:01:14.102Z] 5379.57 IOPS, 21.01 MiB/s [2024-11-18T12:01:15.039Z] 5397.38 IOPS, 21.08 MiB/s [2024-11-18T12:01:15.977Z] 5410.33 IOPS, 21.13 MiB/s [2024-11-18T12:01:15.977Z] 5416.80 IOPS, 21.16 MiB/s 00:18:18.275 Latency(us) 00:18:18.275 [2024-11-18T12:01:15.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.275 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:18.275 Verification LBA range: start 0x0 length 0x2000 00:18:18.275 TLSTESTn1 : 10.02 5420.51 21.17 0.00 0.00 23577.49 6496.61 23251.03 00:18:18.275 [2024-11-18T12:01:15.977Z] =================================================================================================================== 00:18:18.275 [2024-11-18T12:01:15.977Z] Total : 5420.51 21.17 0.00 0.00 23577.49 6496.61 23251.03 00:18:18.275 { 00:18:18.275 "results": [ 00:18:18.275 { 00:18:18.275 "job": "TLSTESTn1", 00:18:18.275 "core_mask": "0x4", 00:18:18.275 "workload": "verify", 00:18:18.275 "status": "finished", 00:18:18.275 "verify_range": { 00:18:18.275 "start": 0, 00:18:18.275 "length": 8192 00:18:18.275 }, 00:18:18.275 "queue_depth": 128, 00:18:18.275 "io_size": 4096, 00:18:18.275 "runtime": 10.016592, 00:18:18.275 "iops": 5420.506295953754, 00:18:18.275 "mibps": 21.17385271856935, 00:18:18.275 "io_failed": 0, 00:18:18.275 "io_timeout": 0, 00:18:18.275 "avg_latency_us": 23577.49498707944, 00:18:18.275 "min_latency_us": 6496.612173913043, 00:18:18.275 "max_latency_us": 23251.03304347826 00:18:18.275 } 00:18:18.275 ], 00:18:18.275 "core_count": 1 00:18:18.275 } 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
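The bdevperf summary above reports both IOPS and MiB/s for the same run. With the 4096-byte I/O size used here (`-o 4096`), the two columns are related by a fixed factor of 256, which is a quick sanity check on the table; the small sketch below reproduces the final-row conversion:

```python
# bdevperf ran with -o 4096 (4 KiB I/Os); MiB/s = IOPS * io_size / 2**20,
# and 4096 / 2**20 == 1/256.
io_size = 4096
iops = 5420.51                     # "Total" IOPS from the summary above
mibps = iops * io_size / (1024 * 1024)
print(f"{mibps:.2f} MiB/s")        # matches the 21.17 MiB/s column
```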
1' SIGINT SIGTERM EXIT 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2341496 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2341496 ']' 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2341496 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2341496 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2341496' 00:18:18.275 killing process with pid 2341496 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2341496 00:18:18.275 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.275 00:18:18.275 Latency(us) 00:18:18.275 [2024-11-18T12:01:15.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.275 [2024-11-18T12:01:15.977Z] =================================================================================================================== 00:18:18.275 [2024-11-18T12:01:15.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2341496 00:18:18.275 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Brr5j5DZrC 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Brr5j5DZrC 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Brr5j5DZrC 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Brr5j5DZrC 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Brr5j5DZrC 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2343330 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2343330 /var/tmp/bdevperf.sock 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2343330 ']' 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:18.276 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.536 [2024-11-18 13:01:15.990227] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:18.536 [2024-11-18 13:01:15.990276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343330 ] 00:18:18.536 [2024-11-18 13:01:16.053565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.536 [2024-11-18 13:01:16.090324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.536 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:18.536 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:18.536 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Brr5j5DZrC 00:18:18.795 [2024-11-18 13:01:16.356670] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Brr5j5DZrC': 0100666 00:18:18.795 [2024-11-18 13:01:16.356699] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:18.795 request: 00:18:18.795 { 00:18:18.795 "name": "key0", 00:18:18.795 "path": "/tmp/tmp.Brr5j5DZrC", 00:18:18.795 "method": "keyring_file_add_key", 00:18:18.795 "req_id": 1 00:18:18.795 } 00:18:18.795 Got JSON-RPC error response 00:18:18.795 response: 00:18:18.795 { 00:18:18.795 "code": -1, 00:18:18.795 "message": "Operation not permitted" 00:18:18.795 } 00:18:18.795 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:19.055 [2024-11-18 13:01:16.565288] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.055 [2024-11-18 13:01:16.565314] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:19.055 request: 00:18:19.055 { 00:18:19.055 "name": "TLSTEST", 00:18:19.055 "trtype": "tcp", 00:18:19.055 "traddr": "10.0.0.2", 00:18:19.055 "adrfam": "ipv4", 00:18:19.055 "trsvcid": "4420", 00:18:19.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.055 "prchk_reftag": false, 00:18:19.055 "prchk_guard": false, 00:18:19.055 "hdgst": false, 00:18:19.055 "ddgst": false, 00:18:19.055 "psk": "key0", 00:18:19.055 "allow_unrecognized_csi": false, 00:18:19.055 "method": "bdev_nvme_attach_controller", 00:18:19.055 "req_id": 1 00:18:19.055 } 00:18:19.055 Got JSON-RPC error response 00:18:19.055 response: 00:18:19.055 { 00:18:19.055 "code": -126, 00:18:19.055 "message": "Required key not available" 00:18:19.055 } 00:18:19.055 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2343330 00:18:19.055 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2343330 ']' 00:18:19.055 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2343330 00:18:19.055 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:19.056 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:19.056 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2343330 00:18:19.056 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:19.056 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:19.056 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 2343330' 00:18:19.056 killing process with pid 2343330 00:18:19.056 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2343330 00:18:19.056 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.056 00:18:19.056 Latency(us) 00:18:19.056 [2024-11-18T12:01:16.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.056 [2024-11-18T12:01:16.758Z] =================================================================================================================== 00:18:19.056 [2024-11-18T12:01:16.758Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.056 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2343330 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2341242 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2341242 ']' 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2341242 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2341242 00:18:19.316 
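The failure path above is deliberate: after `chmod 0666`, `keyring_file_add_key` rejects the key file (`Invalid permissions for key file '/tmp/tmp.Brr5j5DZrC': 0100666` → `Operation not permitted`), and the subsequent `bdev_nvme_attach_controller --psk key0` then fails with `-126` / `Required key not available` because the key was never added. A minimal Python sketch of that permission gate is below; the helper name mirrors the `keyring_file_check_path` symbol in the log, but the real check lives in SPDK's C keyring module, so treat this as illustrative only. The rule it models: a PSK file readable or writable by group/other is refused outright, which is why the suite first uses `chmod 0600`.

```python
import os
import stat
import tempfile

def keyring_file_check_path(path: str) -> None:
    """Sketch of the keyring permission gate: reject any key file whose
    group/other permission bits are set (i.e. anything looser than 0600)."""
    mode = os.stat(path).st_mode
    if stat.S_IMODE(mode) & (stat.S_IRWXG | stat.S_IRWXO):
        # 0{mode:o} prints the full st_mode in octal, e.g. 0100666,
        # matching the format seen in the log above.
        raise PermissionError(
            f"Invalid permissions for key file '{path}': 0{mode:o}")

with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name
os.chmod(key_path, 0o600)
keyring_file_check_path(key_path)      # 0600: accepted
os.chmod(key_path, 0o666)
try:
    keyring_file_check_path(key_path)  # 0666: rejected, like the RPC error
except PermissionError as e:
    print(e)
os.unlink(key_path)
```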
13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2341242' 00:18:19.316 killing process with pid 2341242 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2341242 00:18:19.316 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2341242 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2343572 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2343572 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2343572 ']' 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:19.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:19.575 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.575 [2024-11-18 13:01:17.079314] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:19.575 [2024-11-18 13:01:17.079366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.575 [2024-11-18 13:01:17.159061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.575 [2024-11-18 13:01:17.199197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.576 [2024-11-18 13:01:17.199236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.576 [2024-11-18 13:01:17.199243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.576 [2024-11-18 13:01:17.199249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.576 [2024-11-18 13:01:17.199258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:19.576 [2024-11-18 13:01:17.199841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Brr5j5DZrC 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Brr5j5DZrC 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Brr5j5DZrC 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Brr5j5DZrC 00:18:19.834 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:19.834 [2024-11-18 13:01:17.510159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.093 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:20.093 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:20.351 [2024-11-18 13:01:17.899173] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.351 [2024-11-18 13:01:17.899374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.351 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:20.610 malloc0 00:18:20.610 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:20.869 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Brr5j5DZrC 00:18:20.869 [2024-11-18 13:01:18.504907] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Brr5j5DZrC': 0100666 00:18:20.869 [2024-11-18 13:01:18.504933] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:20.869 request: 00:18:20.869 { 00:18:20.869 "name": "key0", 00:18:20.869 "path": "/tmp/tmp.Brr5j5DZrC", 00:18:20.869 "method": "keyring_file_add_key", 00:18:20.869 "req_id": 1 
00:18:20.869 } 00:18:20.869 Got JSON-RPC error response 00:18:20.869 response: 00:18:20.869 { 00:18:20.869 "code": -1, 00:18:20.869 "message": "Operation not permitted" 00:18:20.869 } 00:18:20.869 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:21.129 [2024-11-18 13:01:18.693418] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:21.129 [2024-11-18 13:01:18.693452] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:21.129 request: 00:18:21.129 { 00:18:21.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.129 "host": "nqn.2016-06.io.spdk:host1", 00:18:21.129 "psk": "key0", 00:18:21.129 "method": "nvmf_subsystem_add_host", 00:18:21.129 "req_id": 1 00:18:21.129 } 00:18:21.129 Got JSON-RPC error response 00:18:21.129 response: 00:18:21.129 { 00:18:21.129 "code": -32603, 00:18:21.129 "message": "Internal error" 00:18:21.129 } 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2343572 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2343572 ']' 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2343572 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:21.129 13:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2343572 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2343572' 00:18:21.129 killing process with pid 2343572 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2343572 00:18:21.129 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2343572 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Brr5j5DZrC 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2343843 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2343843 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2343843 ']' 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:21.389 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.389 [2024-11-18 13:01:18.997906] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:21.389 [2024-11-18 13:01:18.997953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.389 [2024-11-18 13:01:19.054944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.648 [2024-11-18 13:01:19.093658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.648 [2024-11-18 13:01:19.093692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.648 [2024-11-18 13:01:19.093700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.648 [2024-11-18 13:01:19.093706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.648 [2024-11-18 13:01:19.093711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:21.648 [2024-11-18 13:01:19.094275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.648 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:21.648 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:21.648 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:21.648 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:21.649 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.649 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.649 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Brr5j5DZrC 00:18:21.649 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Brr5j5DZrC 00:18:21.649 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:21.908 [2024-11-18 13:01:19.408724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.908 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.167 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.167 [2024-11-18 13:01:19.805741] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.167 [2024-11-18 13:01:19.805961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:22.167 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:22.426 malloc0 00:18:22.426 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:22.686 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Brr5j5DZrC 00:18:22.946 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2344105 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2344105 /var/tmp/bdevperf.sock 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2344105 ']' 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:23.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.205 [2024-11-18 13:01:20.681854] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:23.205 [2024-11-18 13:01:20.681902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344105 ] 00:18:23.205 [2024-11-18 13:01:20.760267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.205 [2024-11-18 13:01:20.802597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:23.205 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Brr5j5DZrC 00:18:23.465 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:23.724 [2024-11-18 13:01:21.269601] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.724 TLSTESTn1 00:18:23.724 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:23.984 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:23.984 "subsystems": [ 00:18:23.984 { 00:18:23.984 "subsystem": "keyring", 00:18:23.984 "config": [ 00:18:23.984 { 00:18:23.984 "method": "keyring_file_add_key", 00:18:23.984 "params": { 00:18:23.984 "name": "key0", 00:18:23.984 "path": "/tmp/tmp.Brr5j5DZrC" 00:18:23.984 } 00:18:23.984 } 00:18:23.984 ] 00:18:23.984 }, 00:18:23.984 { 00:18:23.984 "subsystem": "iobuf", 00:18:23.984 "config": [ 00:18:23.984 { 00:18:23.984 "method": "iobuf_set_options", 00:18:23.984 "params": { 00:18:23.984 "small_pool_count": 8192, 00:18:23.984 "large_pool_count": 1024, 00:18:23.984 "small_bufsize": 8192, 00:18:23.984 "large_bufsize": 135168, 00:18:23.984 "enable_numa": false 00:18:23.984 } 00:18:23.984 } 00:18:23.984 ] 00:18:23.984 }, 00:18:23.984 { 00:18:23.984 "subsystem": "sock", 00:18:23.984 "config": [ 00:18:23.984 { 00:18:23.984 "method": "sock_set_default_impl", 00:18:23.984 "params": { 00:18:23.984 "impl_name": "posix" 00:18:23.984 } 00:18:23.984 }, 00:18:23.984 { 00:18:23.984 "method": "sock_impl_set_options", 00:18:23.984 "params": { 00:18:23.984 "impl_name": "ssl", 00:18:23.984 "recv_buf_size": 4096, 00:18:23.984 "send_buf_size": 4096, 00:18:23.984 "enable_recv_pipe": true, 00:18:23.984 "enable_quickack": false, 00:18:23.984 "enable_placement_id": 0, 00:18:23.984 "enable_zerocopy_send_server": true, 00:18:23.984 "enable_zerocopy_send_client": false, 00:18:23.984 "zerocopy_threshold": 0, 00:18:23.984 "tls_version": 0, 00:18:23.984 "enable_ktls": false 00:18:23.984 } 00:18:23.984 }, 00:18:23.984 { 00:18:23.984 "method": "sock_impl_set_options", 00:18:23.984 "params": { 00:18:23.984 "impl_name": "posix", 00:18:23.984 "recv_buf_size": 2097152, 00:18:23.984 "send_buf_size": 2097152, 00:18:23.984 "enable_recv_pipe": true, 00:18:23.984 "enable_quickack": false, 00:18:23.984 "enable_placement_id": 0, 
00:18:23.984 "enable_zerocopy_send_server": true, 00:18:23.984 "enable_zerocopy_send_client": false, 00:18:23.984 "zerocopy_threshold": 0, 00:18:23.984 "tls_version": 0, 00:18:23.984 "enable_ktls": false 00:18:23.984 } 00:18:23.984 } 00:18:23.984 ] 00:18:23.984 }, 00:18:23.984 { 00:18:23.984 "subsystem": "vmd", 00:18:23.984 "config": [] 00:18:23.984 }, 00:18:23.984 { 00:18:23.984 "subsystem": "accel", 00:18:23.984 "config": [ 00:18:23.984 { 00:18:23.984 "method": "accel_set_options", 00:18:23.984 "params": { 00:18:23.984 "small_cache_size": 128, 00:18:23.984 "large_cache_size": 16, 00:18:23.984 "task_count": 2048, 00:18:23.984 "sequence_count": 2048, 00:18:23.984 "buf_count": 2048 00:18:23.984 } 00:18:23.984 } 00:18:23.984 ] 00:18:23.984 }, 00:18:23.984 { 00:18:23.984 "subsystem": "bdev", 00:18:23.984 "config": [ 00:18:23.984 { 00:18:23.984 "method": "bdev_set_options", 00:18:23.984 "params": { 00:18:23.984 "bdev_io_pool_size": 65535, 00:18:23.984 "bdev_io_cache_size": 256, 00:18:23.984 "bdev_auto_examine": true, 00:18:23.984 "iobuf_small_cache_size": 128, 00:18:23.984 "iobuf_large_cache_size": 16 00:18:23.984 } 00:18:23.984 }, 00:18:23.984 { 00:18:23.984 "method": "bdev_raid_set_options", 00:18:23.984 "params": { 00:18:23.984 "process_window_size_kb": 1024, 00:18:23.984 "process_max_bandwidth_mb_sec": 0 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "bdev_iscsi_set_options", 00:18:23.985 "params": { 00:18:23.985 "timeout_sec": 30 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "bdev_nvme_set_options", 00:18:23.985 "params": { 00:18:23.985 "action_on_timeout": "none", 00:18:23.985 "timeout_us": 0, 00:18:23.985 "timeout_admin_us": 0, 00:18:23.985 "keep_alive_timeout_ms": 10000, 00:18:23.985 "arbitration_burst": 0, 00:18:23.985 "low_priority_weight": 0, 00:18:23.985 "medium_priority_weight": 0, 00:18:23.985 "high_priority_weight": 0, 00:18:23.985 "nvme_adminq_poll_period_us": 10000, 00:18:23.985 "nvme_ioq_poll_period_us": 0, 
00:18:23.985 "io_queue_requests": 0, 00:18:23.985 "delay_cmd_submit": true, 00:18:23.985 "transport_retry_count": 4, 00:18:23.985 "bdev_retry_count": 3, 00:18:23.985 "transport_ack_timeout": 0, 00:18:23.985 "ctrlr_loss_timeout_sec": 0, 00:18:23.985 "reconnect_delay_sec": 0, 00:18:23.985 "fast_io_fail_timeout_sec": 0, 00:18:23.985 "disable_auto_failback": false, 00:18:23.985 "generate_uuids": false, 00:18:23.985 "transport_tos": 0, 00:18:23.985 "nvme_error_stat": false, 00:18:23.985 "rdma_srq_size": 0, 00:18:23.985 "io_path_stat": false, 00:18:23.985 "allow_accel_sequence": false, 00:18:23.985 "rdma_max_cq_size": 0, 00:18:23.985 "rdma_cm_event_timeout_ms": 0, 00:18:23.985 "dhchap_digests": [ 00:18:23.985 "sha256", 00:18:23.985 "sha384", 00:18:23.985 "sha512" 00:18:23.985 ], 00:18:23.985 "dhchap_dhgroups": [ 00:18:23.985 "null", 00:18:23.985 "ffdhe2048", 00:18:23.985 "ffdhe3072", 00:18:23.985 "ffdhe4096", 00:18:23.985 "ffdhe6144", 00:18:23.985 "ffdhe8192" 00:18:23.985 ] 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "bdev_nvme_set_hotplug", 00:18:23.985 "params": { 00:18:23.985 "period_us": 100000, 00:18:23.985 "enable": false 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "bdev_malloc_create", 00:18:23.985 "params": { 00:18:23.985 "name": "malloc0", 00:18:23.985 "num_blocks": 8192, 00:18:23.985 "block_size": 4096, 00:18:23.985 "physical_block_size": 4096, 00:18:23.985 "uuid": "35c3bcc6-2172-4c26-b76c-a9fd02d5828e", 00:18:23.985 "optimal_io_boundary": 0, 00:18:23.985 "md_size": 0, 00:18:23.985 "dif_type": 0, 00:18:23.985 "dif_is_head_of_md": false, 00:18:23.985 "dif_pi_format": 0 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "bdev_wait_for_examine" 00:18:23.985 } 00:18:23.985 ] 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "subsystem": "nbd", 00:18:23.985 "config": [] 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "subsystem": "scheduler", 00:18:23.985 "config": [ 00:18:23.985 { 00:18:23.985 "method": 
"framework_set_scheduler", 00:18:23.985 "params": { 00:18:23.985 "name": "static" 00:18:23.985 } 00:18:23.985 } 00:18:23.985 ] 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "subsystem": "nvmf", 00:18:23.985 "config": [ 00:18:23.985 { 00:18:23.985 "method": "nvmf_set_config", 00:18:23.985 "params": { 00:18:23.985 "discovery_filter": "match_any", 00:18:23.985 "admin_cmd_passthru": { 00:18:23.985 "identify_ctrlr": false 00:18:23.985 }, 00:18:23.985 "dhchap_digests": [ 00:18:23.985 "sha256", 00:18:23.985 "sha384", 00:18:23.985 "sha512" 00:18:23.985 ], 00:18:23.985 "dhchap_dhgroups": [ 00:18:23.985 "null", 00:18:23.985 "ffdhe2048", 00:18:23.985 "ffdhe3072", 00:18:23.985 "ffdhe4096", 00:18:23.985 "ffdhe6144", 00:18:23.985 "ffdhe8192" 00:18:23.985 ] 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "nvmf_set_max_subsystems", 00:18:23.985 "params": { 00:18:23.985 "max_subsystems": 1024 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "nvmf_set_crdt", 00:18:23.985 "params": { 00:18:23.985 "crdt1": 0, 00:18:23.985 "crdt2": 0, 00:18:23.985 "crdt3": 0 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "nvmf_create_transport", 00:18:23.985 "params": { 00:18:23.985 "trtype": "TCP", 00:18:23.985 "max_queue_depth": 128, 00:18:23.985 "max_io_qpairs_per_ctrlr": 127, 00:18:23.985 "in_capsule_data_size": 4096, 00:18:23.985 "max_io_size": 131072, 00:18:23.985 "io_unit_size": 131072, 00:18:23.985 "max_aq_depth": 128, 00:18:23.985 "num_shared_buffers": 511, 00:18:23.985 "buf_cache_size": 4294967295, 00:18:23.985 "dif_insert_or_strip": false, 00:18:23.985 "zcopy": false, 00:18:23.985 "c2h_success": false, 00:18:23.985 "sock_priority": 0, 00:18:23.985 "abort_timeout_sec": 1, 00:18:23.985 "ack_timeout": 0, 00:18:23.985 "data_wr_pool_size": 0 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "nvmf_create_subsystem", 00:18:23.985 "params": { 00:18:23.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.985 
"allow_any_host": false, 00:18:23.985 "serial_number": "SPDK00000000000001", 00:18:23.985 "model_number": "SPDK bdev Controller", 00:18:23.985 "max_namespaces": 10, 00:18:23.985 "min_cntlid": 1, 00:18:23.985 "max_cntlid": 65519, 00:18:23.985 "ana_reporting": false 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "nvmf_subsystem_add_host", 00:18:23.985 "params": { 00:18:23.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.985 "host": "nqn.2016-06.io.spdk:host1", 00:18:23.985 "psk": "key0" 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "nvmf_subsystem_add_ns", 00:18:23.985 "params": { 00:18:23.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.985 "namespace": { 00:18:23.985 "nsid": 1, 00:18:23.985 "bdev_name": "malloc0", 00:18:23.985 "nguid": "35C3BCC621724C26B76CA9FD02D5828E", 00:18:23.985 "uuid": "35c3bcc6-2172-4c26-b76c-a9fd02d5828e", 00:18:23.985 "no_auto_visible": false 00:18:23.985 } 00:18:23.985 } 00:18:23.985 }, 00:18:23.985 { 00:18:23.985 "method": "nvmf_subsystem_add_listener", 00:18:23.985 "params": { 00:18:23.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.985 "listen_address": { 00:18:23.985 "trtype": "TCP", 00:18:23.985 "adrfam": "IPv4", 00:18:23.985 "traddr": "10.0.0.2", 00:18:23.985 "trsvcid": "4420" 00:18:23.985 }, 00:18:23.985 "secure_channel": true 00:18:23.985 } 00:18:23.985 } 00:18:23.985 ] 00:18:23.985 } 00:18:23.985 ] 00:18:23.985 }' 00:18:23.985 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:24.246 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:24.246 "subsystems": [ 00:18:24.246 { 00:18:24.246 "subsystem": "keyring", 00:18:24.246 "config": [ 00:18:24.246 { 00:18:24.246 "method": "keyring_file_add_key", 00:18:24.246 "params": { 00:18:24.246 "name": "key0", 00:18:24.246 "path": "/tmp/tmp.Brr5j5DZrC" 00:18:24.246 } 
00:18:24.246 } 00:18:24.246 ] 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "subsystem": "iobuf", 00:18:24.246 "config": [ 00:18:24.246 { 00:18:24.246 "method": "iobuf_set_options", 00:18:24.246 "params": { 00:18:24.246 "small_pool_count": 8192, 00:18:24.246 "large_pool_count": 1024, 00:18:24.246 "small_bufsize": 8192, 00:18:24.246 "large_bufsize": 135168, 00:18:24.246 "enable_numa": false 00:18:24.246 } 00:18:24.246 } 00:18:24.246 ] 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "subsystem": "sock", 00:18:24.246 "config": [ 00:18:24.246 { 00:18:24.246 "method": "sock_set_default_impl", 00:18:24.246 "params": { 00:18:24.246 "impl_name": "posix" 00:18:24.246 } 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "method": "sock_impl_set_options", 00:18:24.246 "params": { 00:18:24.246 "impl_name": "ssl", 00:18:24.246 "recv_buf_size": 4096, 00:18:24.246 "send_buf_size": 4096, 00:18:24.246 "enable_recv_pipe": true, 00:18:24.246 "enable_quickack": false, 00:18:24.246 "enable_placement_id": 0, 00:18:24.246 "enable_zerocopy_send_server": true, 00:18:24.246 "enable_zerocopy_send_client": false, 00:18:24.246 "zerocopy_threshold": 0, 00:18:24.246 "tls_version": 0, 00:18:24.246 "enable_ktls": false 00:18:24.246 } 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "method": "sock_impl_set_options", 00:18:24.246 "params": { 00:18:24.246 "impl_name": "posix", 00:18:24.246 "recv_buf_size": 2097152, 00:18:24.246 "send_buf_size": 2097152, 00:18:24.246 "enable_recv_pipe": true, 00:18:24.246 "enable_quickack": false, 00:18:24.246 "enable_placement_id": 0, 00:18:24.246 "enable_zerocopy_send_server": true, 00:18:24.246 "enable_zerocopy_send_client": false, 00:18:24.246 "zerocopy_threshold": 0, 00:18:24.246 "tls_version": 0, 00:18:24.246 "enable_ktls": false 00:18:24.246 } 00:18:24.246 } 00:18:24.246 ] 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "subsystem": "vmd", 00:18:24.246 "config": [] 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "subsystem": "accel", 00:18:24.246 "config": [ 00:18:24.246 { 00:18:24.246 
"method": "accel_set_options", 00:18:24.246 "params": { 00:18:24.246 "small_cache_size": 128, 00:18:24.246 "large_cache_size": 16, 00:18:24.246 "task_count": 2048, 00:18:24.246 "sequence_count": 2048, 00:18:24.246 "buf_count": 2048 00:18:24.246 } 00:18:24.246 } 00:18:24.246 ] 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "subsystem": "bdev", 00:18:24.246 "config": [ 00:18:24.246 { 00:18:24.246 "method": "bdev_set_options", 00:18:24.246 "params": { 00:18:24.246 "bdev_io_pool_size": 65535, 00:18:24.246 "bdev_io_cache_size": 256, 00:18:24.246 "bdev_auto_examine": true, 00:18:24.246 "iobuf_small_cache_size": 128, 00:18:24.246 "iobuf_large_cache_size": 16 00:18:24.246 } 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "method": "bdev_raid_set_options", 00:18:24.246 "params": { 00:18:24.246 "process_window_size_kb": 1024, 00:18:24.246 "process_max_bandwidth_mb_sec": 0 00:18:24.246 } 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "method": "bdev_iscsi_set_options", 00:18:24.246 "params": { 00:18:24.246 "timeout_sec": 30 00:18:24.246 } 00:18:24.246 }, 00:18:24.246 { 00:18:24.246 "method": "bdev_nvme_set_options", 00:18:24.246 "params": { 00:18:24.246 "action_on_timeout": "none", 00:18:24.246 "timeout_us": 0, 00:18:24.246 "timeout_admin_us": 0, 00:18:24.246 "keep_alive_timeout_ms": 10000, 00:18:24.246 "arbitration_burst": 0, 00:18:24.246 "low_priority_weight": 0, 00:18:24.246 "medium_priority_weight": 0, 00:18:24.246 "high_priority_weight": 0, 00:18:24.246 "nvme_adminq_poll_period_us": 10000, 00:18:24.246 "nvme_ioq_poll_period_us": 0, 00:18:24.246 "io_queue_requests": 512, 00:18:24.246 "delay_cmd_submit": true, 00:18:24.246 "transport_retry_count": 4, 00:18:24.246 "bdev_retry_count": 3, 00:18:24.246 "transport_ack_timeout": 0, 00:18:24.246 "ctrlr_loss_timeout_sec": 0, 00:18:24.246 "reconnect_delay_sec": 0, 00:18:24.246 "fast_io_fail_timeout_sec": 0, 00:18:24.247 "disable_auto_failback": false, 00:18:24.247 "generate_uuids": false, 00:18:24.247 "transport_tos": 0, 00:18:24.247 
"nvme_error_stat": false, 00:18:24.247 "rdma_srq_size": 0, 00:18:24.247 "io_path_stat": false, 00:18:24.247 "allow_accel_sequence": false, 00:18:24.247 "rdma_max_cq_size": 0, 00:18:24.247 "rdma_cm_event_timeout_ms": 0, 00:18:24.247 "dhchap_digests": [ 00:18:24.247 "sha256", 00:18:24.247 "sha384", 00:18:24.247 "sha512" 00:18:24.247 ], 00:18:24.247 "dhchap_dhgroups": [ 00:18:24.247 "null", 00:18:24.247 "ffdhe2048", 00:18:24.247 "ffdhe3072", 00:18:24.247 "ffdhe4096", 00:18:24.247 "ffdhe6144", 00:18:24.247 "ffdhe8192" 00:18:24.247 ] 00:18:24.247 } 00:18:24.247 }, 00:18:24.247 { 00:18:24.247 "method": "bdev_nvme_attach_controller", 00:18:24.247 "params": { 00:18:24.247 "name": "TLSTEST", 00:18:24.247 "trtype": "TCP", 00:18:24.247 "adrfam": "IPv4", 00:18:24.247 "traddr": "10.0.0.2", 00:18:24.247 "trsvcid": "4420", 00:18:24.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.247 "prchk_reftag": false, 00:18:24.247 "prchk_guard": false, 00:18:24.247 "ctrlr_loss_timeout_sec": 0, 00:18:24.247 "reconnect_delay_sec": 0, 00:18:24.247 "fast_io_fail_timeout_sec": 0, 00:18:24.247 "psk": "key0", 00:18:24.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.247 "hdgst": false, 00:18:24.247 "ddgst": false, 00:18:24.247 "multipath": "multipath" 00:18:24.247 } 00:18:24.247 }, 00:18:24.247 { 00:18:24.247 "method": "bdev_nvme_set_hotplug", 00:18:24.247 "params": { 00:18:24.247 "period_us": 100000, 00:18:24.247 "enable": false 00:18:24.247 } 00:18:24.247 }, 00:18:24.247 { 00:18:24.247 "method": "bdev_wait_for_examine" 00:18:24.247 } 00:18:24.247 ] 00:18:24.247 }, 00:18:24.247 { 00:18:24.247 "subsystem": "nbd", 00:18:24.247 "config": [] 00:18:24.247 } 00:18:24.247 ] 00:18:24.247 }' 00:18:24.247 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2344105 00:18:24.247 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2344105 ']' 00:18:24.247 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# kill -0 2344105 00:18:24.247 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:24.247 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:24.247 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2344105 00:18:24.507 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:24.507 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:24.507 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2344105' 00:18:24.507 killing process with pid 2344105 00:18:24.507 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2344105 00:18:24.507 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.507 00:18:24.507 Latency(us) 00:18:24.507 [2024-11-18T12:01:22.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.507 [2024-11-18T12:01:22.209Z] =================================================================================================================== 00:18:24.507 [2024-11-18T12:01:22.209Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.507 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2344105 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2343843 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2343843 ']' 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2343843 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2343843 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2343843' 00:18:24.507 killing process with pid 2343843 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2343843 00:18:24.507 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2343843 00:18:24.768 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:24.768 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:24.768 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.768 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:24.768 "subsystems": [ 00:18:24.768 { 00:18:24.768 "subsystem": "keyring", 00:18:24.768 "config": [ 00:18:24.768 { 00:18:24.768 "method": "keyring_file_add_key", 00:18:24.768 "params": { 00:18:24.768 "name": "key0", 00:18:24.768 "path": "/tmp/tmp.Brr5j5DZrC" 00:18:24.768 } 00:18:24.768 } 00:18:24.768 ] 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "subsystem": "iobuf", 00:18:24.768 "config": [ 00:18:24.768 { 00:18:24.768 "method": "iobuf_set_options", 00:18:24.768 "params": { 00:18:24.768 "small_pool_count": 8192, 00:18:24.768 "large_pool_count": 1024, 00:18:24.768 "small_bufsize": 8192, 00:18:24.768 "large_bufsize": 135168, 00:18:24.768 "enable_numa": false 00:18:24.768 } 00:18:24.768 } 00:18:24.768 ] 00:18:24.768 }, 
00:18:24.768 { 00:18:24.768 "subsystem": "sock", 00:18:24.768 "config": [ 00:18:24.768 { 00:18:24.768 "method": "sock_set_default_impl", 00:18:24.768 "params": { 00:18:24.768 "impl_name": "posix" 00:18:24.768 } 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "method": "sock_impl_set_options", 00:18:24.768 "params": { 00:18:24.768 "impl_name": "ssl", 00:18:24.768 "recv_buf_size": 4096, 00:18:24.768 "send_buf_size": 4096, 00:18:24.768 "enable_recv_pipe": true, 00:18:24.768 "enable_quickack": false, 00:18:24.768 "enable_placement_id": 0, 00:18:24.768 "enable_zerocopy_send_server": true, 00:18:24.768 "enable_zerocopy_send_client": false, 00:18:24.768 "zerocopy_threshold": 0, 00:18:24.768 "tls_version": 0, 00:18:24.768 "enable_ktls": false 00:18:24.768 } 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "method": "sock_impl_set_options", 00:18:24.768 "params": { 00:18:24.768 "impl_name": "posix", 00:18:24.768 "recv_buf_size": 2097152, 00:18:24.768 "send_buf_size": 2097152, 00:18:24.768 "enable_recv_pipe": true, 00:18:24.768 "enable_quickack": false, 00:18:24.768 "enable_placement_id": 0, 00:18:24.768 "enable_zerocopy_send_server": true, 00:18:24.768 "enable_zerocopy_send_client": false, 00:18:24.768 "zerocopy_threshold": 0, 00:18:24.768 "tls_version": 0, 00:18:24.768 "enable_ktls": false 00:18:24.768 } 00:18:24.768 } 00:18:24.768 ] 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "subsystem": "vmd", 00:18:24.768 "config": [] 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "subsystem": "accel", 00:18:24.768 "config": [ 00:18:24.768 { 00:18:24.768 "method": "accel_set_options", 00:18:24.768 "params": { 00:18:24.768 "small_cache_size": 128, 00:18:24.768 "large_cache_size": 16, 00:18:24.768 "task_count": 2048, 00:18:24.768 "sequence_count": 2048, 00:18:24.768 "buf_count": 2048 00:18:24.768 } 00:18:24.768 } 00:18:24.768 ] 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "subsystem": "bdev", 00:18:24.768 "config": [ 00:18:24.768 { 00:18:24.768 "method": "bdev_set_options", 00:18:24.768 "params": { 
00:18:24.768 "bdev_io_pool_size": 65535, 00:18:24.768 "bdev_io_cache_size": 256, 00:18:24.768 "bdev_auto_examine": true, 00:18:24.768 "iobuf_small_cache_size": 128, 00:18:24.768 "iobuf_large_cache_size": 16 00:18:24.768 } 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "method": "bdev_raid_set_options", 00:18:24.768 "params": { 00:18:24.768 "process_window_size_kb": 1024, 00:18:24.768 "process_max_bandwidth_mb_sec": 0 00:18:24.768 } 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "method": "bdev_iscsi_set_options", 00:18:24.768 "params": { 00:18:24.768 "timeout_sec": 30 00:18:24.768 } 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "method": "bdev_nvme_set_options", 00:18:24.768 "params": { 00:18:24.768 "action_on_timeout": "none", 00:18:24.768 "timeout_us": 0, 00:18:24.768 "timeout_admin_us": 0, 00:18:24.768 "keep_alive_timeout_ms": 10000, 00:18:24.768 "arbitration_burst": 0, 00:18:24.768 "low_priority_weight": 0, 00:18:24.768 "medium_priority_weight": 0, 00:18:24.768 "high_priority_weight": 0, 00:18:24.768 "nvme_adminq_poll_period_us": 10000, 00:18:24.768 "nvme_ioq_poll_period_us": 0, 00:18:24.768 "io_queue_requests": 0, 00:18:24.768 "delay_cmd_submit": true, 00:18:24.768 "transport_retry_count": 4, 00:18:24.768 "bdev_retry_count": 3, 00:18:24.768 "transport_ack_timeout": 0, 00:18:24.768 "ctrlr_loss_timeout_sec": 0, 00:18:24.768 "reconnect_delay_sec": 0, 00:18:24.768 "fast_io_fail_timeout_sec": 0, 00:18:24.768 "disable_auto_failback": false, 00:18:24.768 "generate_uuids": false, 00:18:24.768 "transport_tos": 0, 00:18:24.768 "nvme_error_stat": false, 00:18:24.768 "rdma_srq_size": 0, 00:18:24.768 "io_path_stat": false, 00:18:24.768 "allow_accel_sequence": false, 00:18:24.768 "rdma_max_cq_size": 0, 00:18:24.768 "rdma_cm_event_timeout_ms": 0, 00:18:24.768 "dhchap_digests": [ 00:18:24.768 "sha256", 00:18:24.768 "sha384", 00:18:24.768 "sha512" 00:18:24.768 ], 00:18:24.768 "dhchap_dhgroups": [ 00:18:24.768 "null", 00:18:24.768 "ffdhe2048", 00:18:24.768 "ffdhe3072", 00:18:24.768 
"ffdhe4096", 00:18:24.768 "ffdhe6144", 00:18:24.768 "ffdhe8192" 00:18:24.768 ] 00:18:24.768 } 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "method": "bdev_nvme_set_hotplug", 00:18:24.768 "params": { 00:18:24.768 "period_us": 100000, 00:18:24.768 "enable": false 00:18:24.768 } 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "method": "bdev_malloc_create", 00:18:24.768 "params": { 00:18:24.768 "name": "malloc0", 00:18:24.768 "num_blocks": 8192, 00:18:24.768 "block_size": 4096, 00:18:24.768 "physical_block_size": 4096, 00:18:24.768 "uuid": "35c3bcc6-2172-4c26-b76c-a9fd02d5828e", 00:18:24.768 "optimal_io_boundary": 0, 00:18:24.768 "md_size": 0, 00:18:24.768 "dif_type": 0, 00:18:24.768 "dif_is_head_of_md": false, 00:18:24.768 "dif_pi_format": 0 00:18:24.768 } 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "method": "bdev_wait_for_examine" 00:18:24.768 } 00:18:24.768 ] 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "subsystem": "nbd", 00:18:24.768 "config": [] 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "subsystem": "scheduler", 00:18:24.768 "config": [ 00:18:24.768 { 00:18:24.768 "method": "framework_set_scheduler", 00:18:24.768 "params": { 00:18:24.768 "name": "static" 00:18:24.768 } 00:18:24.768 } 00:18:24.768 ] 00:18:24.768 }, 00:18:24.768 { 00:18:24.768 "subsystem": "nvmf", 00:18:24.768 "config": [ 00:18:24.768 { 00:18:24.768 "method": "nvmf_set_config", 00:18:24.768 "params": { 00:18:24.768 "discovery_filter": "match_any", 00:18:24.768 "admin_cmd_passthru": { 00:18:24.768 "identify_ctrlr": false 00:18:24.768 }, 00:18:24.768 "dhchap_digests": [ 00:18:24.769 "sha256", 00:18:24.769 "sha384", 00:18:24.769 "sha512" 00:18:24.769 ], 00:18:24.769 "dhchap_dhgroups": [ 00:18:24.769 "null", 00:18:24.769 "ffdhe2048", 00:18:24.769 "ffdhe3072", 00:18:24.769 "ffdhe4096", 00:18:24.769 "ffdhe6144", 00:18:24.769 "ffdhe8192" 00:18:24.769 ] 00:18:24.769 } 00:18:24.769 }, 00:18:24.769 { 00:18:24.769 "method": "nvmf_set_max_subsystems", 00:18:24.769 "params": { 00:18:24.769 "max_subsystems": 1024 
00:18:24.769 } 00:18:24.769 }, 00:18:24.769 { 00:18:24.769 "method": "nvmf_set_crdt", 00:18:24.769 "params": { 00:18:24.769 "crdt1": 0, 00:18:24.769 "crdt2": 0, 00:18:24.769 "crdt3": 0 00:18:24.769 } 00:18:24.769 }, 00:18:24.769 { 00:18:24.769 "method": "nvmf_create_transport", 00:18:24.769 "params": { 00:18:24.769 "trtype": "TCP", 00:18:24.769 "max_queue_depth": 128, 00:18:24.769 "max_io_qpairs_per_ctrlr": 127, 00:18:24.769 "in_capsule_data_size": 4096, 00:18:24.769 "max_io_size": 131072, 00:18:24.769 "io_unit_size": 131072, 00:18:24.769 "max_aq_depth": 128, 00:18:24.769 "num_shared_buffers": 511, 00:18:24.769 "buf_cache_size": 4294967295, 00:18:24.769 "dif_insert_or_strip": false, 00:18:24.769 "zcopy": false, 00:18:24.769 "c2h_success": false, 00:18:24.769 "sock_priority": 0, 00:18:24.769 "abort_timeout_sec": 1, 00:18:24.769 "ack_timeout": 0, 00:18:24.769 "data_wr_pool_size": 0 00:18:24.769 } 00:18:24.769 }, 00:18:24.769 { 00:18:24.769 "method": "nvmf_create_subsystem", 00:18:24.769 "params": { 00:18:24.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.769 "allow_any_host": false, 00:18:24.769 "serial_number": "SPDK00000000000001", 00:18:24.769 "model_number": "SPDK bdev Controller", 00:18:24.769 "max_namespaces": 10, 00:18:24.769 "min_cntlid": 1, 00:18:24.769 "max_cntlid": 65519, 00:18:24.769 "ana_reporting": false 00:18:24.769 } 00:18:24.769 }, 00:18:24.769 { 00:18:24.769 "method": "nvmf_subsystem_add_host", 00:18:24.769 "params": { 00:18:24.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.769 "host": "nqn.2016-06.io.spdk:host1", 00:18:24.769 "psk": "key0" 00:18:24.769 } 00:18:24.769 }, 00:18:24.769 { 00:18:24.769 "method": "nvmf_subsystem_add_ns", 00:18:24.769 "params": { 00:18:24.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.769 "namespace": { 00:18:24.769 "nsid": 1, 00:18:24.769 "bdev_name": "malloc0", 00:18:24.769 "nguid": "35C3BCC621724C26B76CA9FD02D5828E", 00:18:24.769 "uuid": "35c3bcc6-2172-4c26-b76c-a9fd02d5828e", 00:18:24.769 "no_auto_visible": 
false 00:18:24.769 } 00:18:24.769 } 00:18:24.769 }, 00:18:24.769 { 00:18:24.769 "method": "nvmf_subsystem_add_listener", 00:18:24.769 "params": { 00:18:24.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.769 "listen_address": { 00:18:24.769 "trtype": "TCP", 00:18:24.769 "adrfam": "IPv4", 00:18:24.769 "traddr": "10.0.0.2", 00:18:24.769 "trsvcid": "4420" 00:18:24.769 }, 00:18:24.769 "secure_channel": true 00:18:24.769 } 00:18:24.769 } 00:18:24.769 ] 00:18:24.769 } 00:18:24.769 ] 00:18:24.769 }' 00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2344459 00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2344459 00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2344459 ']' 00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:24.769 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.769 [2024-11-18 13:01:22.384522] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:24.769 [2024-11-18 13:01:22.384567] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.769 [2024-11-18 13:01:22.462707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.029 [2024-11-18 13:01:22.506829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.029 [2024-11-18 13:01:22.506865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.029 [2024-11-18 13:01:22.506873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.029 [2024-11-18 13:01:22.506880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.029 [2024-11-18 13:01:22.506885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:25.029 [2024-11-18 13:01:22.507468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.029 [2024-11-18 13:01:22.718939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.288 [2024-11-18 13:01:22.750975] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.288 [2024-11-18 13:01:22.751174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.547 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:25.547 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:25.547 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.548 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:25.548 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.808 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.808 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2344593 00:18:25.808 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2344593 /var/tmp/bdevperf.sock 00:18:25.808 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2344593 ']' 00:18:25.808 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.808 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:25.808 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:18:25.808 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.808 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:25.808 "subsystems": [ 00:18:25.808 { 00:18:25.808 "subsystem": "keyring", 00:18:25.808 "config": [ 00:18:25.808 { 00:18:25.808 "method": "keyring_file_add_key", 00:18:25.808 "params": { 00:18:25.808 "name": "key0", 00:18:25.808 "path": "/tmp/tmp.Brr5j5DZrC" 00:18:25.808 } 00:18:25.808 } 00:18:25.808 ] 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "subsystem": "iobuf", 00:18:25.808 "config": [ 00:18:25.808 { 00:18:25.808 "method": "iobuf_set_options", 00:18:25.808 "params": { 00:18:25.808 "small_pool_count": 8192, 00:18:25.808 "large_pool_count": 1024, 00:18:25.808 "small_bufsize": 8192, 00:18:25.808 "large_bufsize": 135168, 00:18:25.808 "enable_numa": false 00:18:25.808 } 00:18:25.808 } 00:18:25.808 ] 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "subsystem": "sock", 00:18:25.808 "config": [ 00:18:25.808 { 00:18:25.808 "method": "sock_set_default_impl", 00:18:25.808 "params": { 00:18:25.808 "impl_name": "posix" 00:18:25.808 } 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "method": "sock_impl_set_options", 00:18:25.808 "params": { 00:18:25.808 "impl_name": "ssl", 00:18:25.808 "recv_buf_size": 4096, 00:18:25.808 "send_buf_size": 4096, 00:18:25.808 "enable_recv_pipe": true, 00:18:25.808 "enable_quickack": false, 00:18:25.808 "enable_placement_id": 0, 00:18:25.808 "enable_zerocopy_send_server": true, 00:18:25.808 "enable_zerocopy_send_client": false, 00:18:25.808 "zerocopy_threshold": 0, 00:18:25.808 "tls_version": 0, 00:18:25.808 "enable_ktls": false 00:18:25.808 } 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "method": "sock_impl_set_options", 00:18:25.808 "params": { 
00:18:25.808 "impl_name": "posix", 00:18:25.808 "recv_buf_size": 2097152, 00:18:25.808 "send_buf_size": 2097152, 00:18:25.808 "enable_recv_pipe": true, 00:18:25.808 "enable_quickack": false, 00:18:25.808 "enable_placement_id": 0, 00:18:25.808 "enable_zerocopy_send_server": true, 00:18:25.808 "enable_zerocopy_send_client": false, 00:18:25.808 "zerocopy_threshold": 0, 00:18:25.808 "tls_version": 0, 00:18:25.808 "enable_ktls": false 00:18:25.808 } 00:18:25.808 } 00:18:25.808 ] 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "subsystem": "vmd", 00:18:25.808 "config": [] 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "subsystem": "accel", 00:18:25.808 "config": [ 00:18:25.808 { 00:18:25.808 "method": "accel_set_options", 00:18:25.808 "params": { 00:18:25.808 "small_cache_size": 128, 00:18:25.808 "large_cache_size": 16, 00:18:25.808 "task_count": 2048, 00:18:25.808 "sequence_count": 2048, 00:18:25.808 "buf_count": 2048 00:18:25.808 } 00:18:25.808 } 00:18:25.808 ] 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "subsystem": "bdev", 00:18:25.808 "config": [ 00:18:25.808 { 00:18:25.808 "method": "bdev_set_options", 00:18:25.808 "params": { 00:18:25.808 "bdev_io_pool_size": 65535, 00:18:25.808 "bdev_io_cache_size": 256, 00:18:25.808 "bdev_auto_examine": true, 00:18:25.808 "iobuf_small_cache_size": 128, 00:18:25.808 "iobuf_large_cache_size": 16 00:18:25.808 } 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "method": "bdev_raid_set_options", 00:18:25.808 "params": { 00:18:25.808 "process_window_size_kb": 1024, 00:18:25.808 "process_max_bandwidth_mb_sec": 0 00:18:25.808 } 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "method": "bdev_iscsi_set_options", 00:18:25.808 "params": { 00:18:25.808 "timeout_sec": 30 00:18:25.808 } 00:18:25.808 }, 00:18:25.808 { 00:18:25.808 "method": "bdev_nvme_set_options", 00:18:25.808 "params": { 00:18:25.808 "action_on_timeout": "none", 00:18:25.808 "timeout_us": 0, 00:18:25.808 "timeout_admin_us": 0, 00:18:25.808 "keep_alive_timeout_ms": 10000, 00:18:25.808 
"arbitration_burst": 0, 00:18:25.808 "low_priority_weight": 0, 00:18:25.809 "medium_priority_weight": 0, 00:18:25.809 "high_priority_weight": 0, 00:18:25.809 "nvme_adminq_poll_period_us": 10000, 00:18:25.809 "nvme_ioq_poll_period_us": 0, 00:18:25.809 "io_queue_requests": 512, 00:18:25.809 "delay_cmd_submit": true, 00:18:25.809 "transport_retry_count": 4, 00:18:25.809 "bdev_retry_count": 3, 00:18:25.809 "transport_ack_timeout": 0, 00:18:25.809 "ctrlr_loss_timeout_sec": 0, 00:18:25.809 "reconnect_delay_sec": 0, 00:18:25.809 "fast_io_fail_timeout_sec": 0, 00:18:25.809 "disable_auto_failback": false, 00:18:25.809 "generate_uuids": false, 00:18:25.809 "transport_tos": 0, 00:18:25.809 "nvme_error_stat": false, 00:18:25.809 "rdma_srq_size": 0, 00:18:25.809 "io_path_stat": false, 00:18:25.809 "allow_accel_sequence": false, 00:18:25.809 "rdma_max_cq_size": 0, 00:18:25.809 "rdma_cm_event_timeout_ms": 0, 00:18:25.809 "dhchap_digests": [ 00:18:25.809 "sha256", 00:18:25.809 "sha384", 00:18:25.809 "sha512" 00:18:25.809 ], 00:18:25.809 "dhchap_dhgroups": [ 00:18:25.809 "null", 00:18:25.809 "ffdhe2048", 00:18:25.809 "ffdhe3072", 00:18:25.809 "ffdhe4096", 00:18:25.809 "ffdhe6144", 00:18:25.809 "ffdhe8192" 00:18:25.809 ] 00:18:25.809 } 00:18:25.809 }, 00:18:25.809 { 00:18:25.809 "method": "bdev_nvme_attach_controller", 00:18:25.809 "params": { 00:18:25.809 "name": "TLSTEST", 00:18:25.809 "trtype": "TCP", 00:18:25.809 "adrfam": "IPv4", 00:18:25.809 "traddr": "10.0.0.2", 00:18:25.809 "trsvcid": "4420", 00:18:25.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.809 "prchk_reftag": false, 00:18:25.809 "prchk_guard": false, 00:18:25.809 "ctrlr_loss_timeout_sec": 0, 00:18:25.809 "reconnect_delay_sec": 0, 00:18:25.809 "fast_io_fail_timeout_sec": 0, 00:18:25.809 "psk": "key0", 00:18:25.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.809 "hdgst": false, 00:18:25.809 "ddgst": false, 00:18:25.809 "multipath": "multipath" 00:18:25.809 } 00:18:25.809 }, 00:18:25.809 { 00:18:25.809 
"method": "bdev_nvme_set_hotplug", 00:18:25.809 "params": { 00:18:25.809 "period_us": 100000, 00:18:25.809 "enable": false 00:18:25.809 } 00:18:25.809 }, 00:18:25.809 { 00:18:25.809 "method": "bdev_wait_for_examine" 00:18:25.809 } 00:18:25.809 ] 00:18:25.809 }, 00:18:25.809 { 00:18:25.809 "subsystem": "nbd", 00:18:25.809 "config": [] 00:18:25.809 } 00:18:25.809 ] 00:18:25.809 }' 00:18:25.809 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:25.809 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.809 [2024-11-18 13:01:23.320326] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:25.809 [2024-11-18 13:01:23.320386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344593 ] 00:18:25.809 [2024-11-18 13:01:23.388997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.809 [2024-11-18 13:01:23.429713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.069 [2024-11-18 13:01:23.582936] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.638 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:26.638 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:26.638 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:26.638 Running I/O for 10 seconds... 
00:18:28.955 5279.00 IOPS, 20.62 MiB/s [2024-11-18T12:01:27.596Z] 5340.00 IOPS, 20.86 MiB/s [2024-11-18T12:01:28.534Z] 5383.67 IOPS, 21.03 MiB/s [2024-11-18T12:01:29.472Z] 5351.25 IOPS, 20.90 MiB/s [2024-11-18T12:01:30.409Z] 5381.80 IOPS, 21.02 MiB/s [2024-11-18T12:01:31.348Z] 5393.50 IOPS, 21.07 MiB/s [2024-11-18T12:01:32.286Z] 5393.00 IOPS, 21.07 MiB/s [2024-11-18T12:01:33.665Z] 5411.62 IOPS, 21.14 MiB/s [2024-11-18T12:01:34.605Z] 5417.89 IOPS, 21.16 MiB/s [2024-11-18T12:01:34.605Z] 5369.00 IOPS, 20.97 MiB/s 00:18:36.903 Latency(us) 00:18:36.903 [2024-11-18T12:01:34.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.903 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:36.903 Verification LBA range: start 0x0 length 0x2000 00:18:36.903 TLSTESTn1 : 10.01 5373.67 20.99 0.00 0.00 23782.54 4872.46 30317.52 00:18:36.903 [2024-11-18T12:01:34.605Z] =================================================================================================================== 00:18:36.903 [2024-11-18T12:01:34.605Z] Total : 5373.67 20.99 0.00 0.00 23782.54 4872.46 30317.52 00:18:36.903 { 00:18:36.903 "results": [ 00:18:36.903 { 00:18:36.903 "job": "TLSTESTn1", 00:18:36.903 "core_mask": "0x4", 00:18:36.903 "workload": "verify", 00:18:36.903 "status": "finished", 00:18:36.903 "verify_range": { 00:18:36.903 "start": 0, 00:18:36.903 "length": 8192 00:18:36.903 }, 00:18:36.903 "queue_depth": 128, 00:18:36.903 "io_size": 4096, 00:18:36.903 "runtime": 10.014937, 00:18:36.903 "iops": 5373.673344125879, 00:18:36.903 "mibps": 20.990911500491716, 00:18:36.903 "io_failed": 0, 00:18:36.903 "io_timeout": 0, 00:18:36.903 "avg_latency_us": 23782.538739027834, 00:18:36.903 "min_latency_us": 4872.459130434782, 00:18:36.903 "max_latency_us": 30317.52347826087 00:18:36.903 } 00:18:36.903 ], 00:18:36.903 "core_count": 1 00:18:36.903 } 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2344593 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2344593 ']' 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2344593 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2344593 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2344593' 00:18:36.903 killing process with pid 2344593 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2344593 00:18:36.903 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.903 00:18:36.903 Latency(us) 00:18:36.903 [2024-11-18T12:01:34.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.903 [2024-11-18T12:01:34.605Z] =================================================================================================================== 00:18:36.903 [2024-11-18T12:01:34.605Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2344593 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2344459 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@952 -- # '[' -z 2344459 ']' 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2344459 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2344459 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2344459' 00:18:36.903 killing process with pid 2344459 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2344459 00:18:36.903 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2344459 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2346435 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2346435 00:18:37.163 
13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2346435 ']' 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:37.163 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.163 [2024-11-18 13:01:34.809647] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:37.163 [2024-11-18 13:01:34.809699] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.423 [2024-11-18 13:01:34.886244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.423 [2024-11-18 13:01:34.923596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.423 [2024-11-18 13:01:34.923631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.423 [2024-11-18 13:01:34.923638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.423 [2024-11-18 13:01:34.923644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:37.423 [2024-11-18 13:01:34.923649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.423 [2024-11-18 13:01:34.924194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.423 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.423 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:37.423 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.423 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.423 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.423 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.423 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Brr5j5DZrC 00:18:37.423 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Brr5j5DZrC 00:18:37.423 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:37.682 [2024-11-18 13:01:35.235847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.682 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:37.942 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:37.942 [2024-11-18 13:01:35.640894] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:38.202 [2024-11-18 13:01:35.641092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.202 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:38.202 malloc0 00:18:38.202 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:38.462 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Brr5j5DZrC 00:18:38.722 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.982 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2346710 00:18:38.982 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:38.982 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.982 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2346710 /var/tmp/bdevperf.sock 00:18:38.982 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2346710 ']' 00:18:38.982 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.982 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:38.982 
13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.982 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:38.982 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.982 [2024-11-18 13:01:36.491423] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:38.982 [2024-11-18 13:01:36.491475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2346710 ] 00:18:38.982 [2024-11-18 13:01:36.566492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.982 [2024-11-18 13:01:36.607814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.241 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:39.241 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:39.241 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Brr5j5DZrC 00:18:39.241 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:39.499 [2024-11-18 13:01:37.088137] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:39.499 nvme0n1 00:18:39.499 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.758 Running I/O for 1 seconds... 00:18:40.696 5163.00 IOPS, 20.17 MiB/s 00:18:40.696 Latency(us) 00:18:40.696 [2024-11-18T12:01:38.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.696 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:40.696 Verification LBA range: start 0x0 length 0x2000 00:18:40.696 nvme0n1 : 1.03 5114.39 19.98 0.00 0.00 24732.10 4815.47 41031.23 00:18:40.696 [2024-11-18T12:01:38.398Z] =================================================================================================================== 00:18:40.696 [2024-11-18T12:01:38.398Z] Total : 5114.39 19.98 0.00 0.00 24732.10 4815.47 41031.23 00:18:40.696 { 00:18:40.696 "results": [ 00:18:40.696 { 00:18:40.696 "job": "nvme0n1", 00:18:40.696 "core_mask": "0x2", 00:18:40.696 "workload": "verify", 00:18:40.696 "status": "finished", 00:18:40.696 "verify_range": { 00:18:40.696 "start": 0, 00:18:40.696 "length": 8192 00:18:40.696 }, 00:18:40.696 "queue_depth": 128, 00:18:40.696 "io_size": 4096, 00:18:40.696 "runtime": 1.034531, 00:18:40.696 "iops": 5114.394832054332, 00:18:40.696 "mibps": 19.978104812712235, 00:18:40.696 "io_failed": 0, 00:18:40.696 "io_timeout": 0, 00:18:40.696 "avg_latency_us": 24732.09989859729, 00:18:40.696 "min_latency_us": 4815.471304347826, 00:18:40.696 "max_latency_us": 41031.2347826087 00:18:40.696 } 00:18:40.696 ], 00:18:40.696 "core_count": 1 00:18:40.696 } 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2346710 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2346710 ']' 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 2346710 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2346710 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2346710' 00:18:40.696 killing process with pid 2346710 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2346710 00:18:40.696 Received shutdown signal, test time was about 1.000000 seconds 00:18:40.696 00:18:40.696 Latency(us) 00:18:40.696 [2024-11-18T12:01:38.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.696 [2024-11-18T12:01:38.398Z] =================================================================================================================== 00:18:40.696 [2024-11-18T12:01:38.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.696 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2346710 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2346435 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2346435 ']' 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2346435 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2346435 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2346435' 00:18:40.956 killing process with pid 2346435 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2346435 00:18:40.956 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2346435 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2347161 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2347161 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2347161 ']' 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:41.216 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.216 [2024-11-18 13:01:38.830134] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:41.216 [2024-11-18 13:01:38.830181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.216 [2024-11-18 13:01:38.909720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.475 [2024-11-18 13:01:38.948147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.475 [2024-11-18 13:01:38.948182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.475 [2024-11-18 13:01:38.948189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.476 [2024-11-18 13:01:38.948195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.476 [2024-11-18 13:01:38.948200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:41.476 [2024-11-18 13:01:38.948650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.476 [2024-11-18 13:01:39.095779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.476 malloc0 00:18:41.476 [2024-11-18 13:01:39.123922] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.476 [2024-11-18 13:01:39.124132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2347191 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2347191 /var/tmp/bdevperf.sock 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2347191 ']' 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:41.476 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.734 [2024-11-18 13:01:39.199935] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:41.734 [2024-11-18 13:01:39.199977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347191 ] 00:18:41.734 [2024-11-18 13:01:39.275080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.734 [2024-11-18 13:01:39.317304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.734 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:41.734 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:41.734 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Brr5j5DZrC 00:18:41.992 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:42.252 [2024-11-18 13:01:39.764940] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.252 nvme0n1 00:18:42.252 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.252 Running I/O for 1 seconds... 
00:18:43.632 5275.00 IOPS, 20.61 MiB/s 00:18:43.632 Latency(us) 00:18:43.632 [2024-11-18T12:01:41.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.632 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:43.632 Verification LBA range: start 0x0 length 0x2000 00:18:43.632 nvme0n1 : 1.02 5320.41 20.78 0.00 0.00 23897.52 5727.28 21883.33 00:18:43.632 [2024-11-18T12:01:41.334Z] =================================================================================================================== 00:18:43.632 [2024-11-18T12:01:41.334Z] Total : 5320.41 20.78 0.00 0.00 23897.52 5727.28 21883.33 00:18:43.632 { 00:18:43.632 "results": [ 00:18:43.632 { 00:18:43.632 "job": "nvme0n1", 00:18:43.632 "core_mask": "0x2", 00:18:43.632 "workload": "verify", 00:18:43.632 "status": "finished", 00:18:43.632 "verify_range": { 00:18:43.632 "start": 0, 00:18:43.632 "length": 8192 00:18:43.632 }, 00:18:43.632 "queue_depth": 128, 00:18:43.632 "io_size": 4096, 00:18:43.632 "runtime": 1.015711, 00:18:43.632 "iops": 5320.411022426655, 00:18:43.632 "mibps": 20.78285555635412, 00:18:43.632 "io_failed": 0, 00:18:43.632 "io_timeout": 0, 00:18:43.632 "avg_latency_us": 23897.52165030734, 00:18:43.632 "min_latency_us": 5727.276521739131, 00:18:43.632 "max_latency_us": 21883.325217391306 00:18:43.632 } 00:18:43.632 ], 00:18:43.632 "core_count": 1 00:18:43.632 } 00:18:43.632 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:43.632 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.632 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.632 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.632 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:43.632 "subsystems": [ 00:18:43.632 { 00:18:43.632 "subsystem": 
"keyring", 00:18:43.632 "config": [ 00:18:43.632 { 00:18:43.632 "method": "keyring_file_add_key", 00:18:43.632 "params": { 00:18:43.632 "name": "key0", 00:18:43.632 "path": "/tmp/tmp.Brr5j5DZrC" 00:18:43.632 } 00:18:43.632 } 00:18:43.632 ] 00:18:43.632 }, 00:18:43.632 { 00:18:43.632 "subsystem": "iobuf", 00:18:43.632 "config": [ 00:18:43.632 { 00:18:43.632 "method": "iobuf_set_options", 00:18:43.632 "params": { 00:18:43.632 "small_pool_count": 8192, 00:18:43.632 "large_pool_count": 1024, 00:18:43.632 "small_bufsize": 8192, 00:18:43.632 "large_bufsize": 135168, 00:18:43.632 "enable_numa": false 00:18:43.632 } 00:18:43.632 } 00:18:43.632 ] 00:18:43.632 }, 00:18:43.632 { 00:18:43.632 "subsystem": "sock", 00:18:43.632 "config": [ 00:18:43.632 { 00:18:43.632 "method": "sock_set_default_impl", 00:18:43.632 "params": { 00:18:43.632 "impl_name": "posix" 00:18:43.632 } 00:18:43.632 }, 00:18:43.632 { 00:18:43.632 "method": "sock_impl_set_options", 00:18:43.632 "params": { 00:18:43.632 "impl_name": "ssl", 00:18:43.632 "recv_buf_size": 4096, 00:18:43.632 "send_buf_size": 4096, 00:18:43.632 "enable_recv_pipe": true, 00:18:43.632 "enable_quickack": false, 00:18:43.632 "enable_placement_id": 0, 00:18:43.632 "enable_zerocopy_send_server": true, 00:18:43.632 "enable_zerocopy_send_client": false, 00:18:43.632 "zerocopy_threshold": 0, 00:18:43.632 "tls_version": 0, 00:18:43.632 "enable_ktls": false 00:18:43.632 } 00:18:43.632 }, 00:18:43.632 { 00:18:43.632 "method": "sock_impl_set_options", 00:18:43.632 "params": { 00:18:43.632 "impl_name": "posix", 00:18:43.632 "recv_buf_size": 2097152, 00:18:43.632 "send_buf_size": 2097152, 00:18:43.632 "enable_recv_pipe": true, 00:18:43.632 "enable_quickack": false, 00:18:43.632 "enable_placement_id": 0, 00:18:43.632 "enable_zerocopy_send_server": true, 00:18:43.632 "enable_zerocopy_send_client": false, 00:18:43.632 "zerocopy_threshold": 0, 00:18:43.632 "tls_version": 0, 00:18:43.632 "enable_ktls": false 00:18:43.632 } 00:18:43.632 } 00:18:43.632 
] 00:18:43.632 }, 00:18:43.632 { 00:18:43.632 "subsystem": "vmd", 00:18:43.632 "config": [] 00:18:43.632 }, 00:18:43.632 { 00:18:43.632 "subsystem": "accel", 00:18:43.632 "config": [ 00:18:43.632 { 00:18:43.632 "method": "accel_set_options", 00:18:43.632 "params": { 00:18:43.632 "small_cache_size": 128, 00:18:43.632 "large_cache_size": 16, 00:18:43.632 "task_count": 2048, 00:18:43.632 "sequence_count": 2048, 00:18:43.632 "buf_count": 2048 00:18:43.632 } 00:18:43.632 } 00:18:43.632 ] 00:18:43.632 }, 00:18:43.632 { 00:18:43.632 "subsystem": "bdev", 00:18:43.632 "config": [ 00:18:43.632 { 00:18:43.632 "method": "bdev_set_options", 00:18:43.632 "params": { 00:18:43.632 "bdev_io_pool_size": 65535, 00:18:43.632 "bdev_io_cache_size": 256, 00:18:43.632 "bdev_auto_examine": true, 00:18:43.632 "iobuf_small_cache_size": 128, 00:18:43.632 "iobuf_large_cache_size": 16 00:18:43.632 } 00:18:43.632 }, 00:18:43.632 { 00:18:43.632 "method": "bdev_raid_set_options", 00:18:43.632 "params": { 00:18:43.632 "process_window_size_kb": 1024, 00:18:43.632 "process_max_bandwidth_mb_sec": 0 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "bdev_iscsi_set_options", 00:18:43.633 "params": { 00:18:43.633 "timeout_sec": 30 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "bdev_nvme_set_options", 00:18:43.633 "params": { 00:18:43.633 "action_on_timeout": "none", 00:18:43.633 "timeout_us": 0, 00:18:43.633 "timeout_admin_us": 0, 00:18:43.633 "keep_alive_timeout_ms": 10000, 00:18:43.633 "arbitration_burst": 0, 00:18:43.633 "low_priority_weight": 0, 00:18:43.633 "medium_priority_weight": 0, 00:18:43.633 "high_priority_weight": 0, 00:18:43.633 "nvme_adminq_poll_period_us": 10000, 00:18:43.633 "nvme_ioq_poll_period_us": 0, 00:18:43.633 "io_queue_requests": 0, 00:18:43.633 "delay_cmd_submit": true, 00:18:43.633 "transport_retry_count": 4, 00:18:43.633 "bdev_retry_count": 3, 00:18:43.633 "transport_ack_timeout": 0, 00:18:43.633 "ctrlr_loss_timeout_sec": 0, 
00:18:43.633 "reconnect_delay_sec": 0, 00:18:43.633 "fast_io_fail_timeout_sec": 0, 00:18:43.633 "disable_auto_failback": false, 00:18:43.633 "generate_uuids": false, 00:18:43.633 "transport_tos": 0, 00:18:43.633 "nvme_error_stat": false, 00:18:43.633 "rdma_srq_size": 0, 00:18:43.633 "io_path_stat": false, 00:18:43.633 "allow_accel_sequence": false, 00:18:43.633 "rdma_max_cq_size": 0, 00:18:43.633 "rdma_cm_event_timeout_ms": 0, 00:18:43.633 "dhchap_digests": [ 00:18:43.633 "sha256", 00:18:43.633 "sha384", 00:18:43.633 "sha512" 00:18:43.633 ], 00:18:43.633 "dhchap_dhgroups": [ 00:18:43.633 "null", 00:18:43.633 "ffdhe2048", 00:18:43.633 "ffdhe3072", 00:18:43.633 "ffdhe4096", 00:18:43.633 "ffdhe6144", 00:18:43.633 "ffdhe8192" 00:18:43.633 ] 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "bdev_nvme_set_hotplug", 00:18:43.633 "params": { 00:18:43.633 "period_us": 100000, 00:18:43.633 "enable": false 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "bdev_malloc_create", 00:18:43.633 "params": { 00:18:43.633 "name": "malloc0", 00:18:43.633 "num_blocks": 8192, 00:18:43.633 "block_size": 4096, 00:18:43.633 "physical_block_size": 4096, 00:18:43.633 "uuid": "62f484f5-1ef0-4421-9269-f5aa29528813", 00:18:43.633 "optimal_io_boundary": 0, 00:18:43.633 "md_size": 0, 00:18:43.633 "dif_type": 0, 00:18:43.633 "dif_is_head_of_md": false, 00:18:43.633 "dif_pi_format": 0 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "bdev_wait_for_examine" 00:18:43.633 } 00:18:43.633 ] 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "subsystem": "nbd", 00:18:43.633 "config": [] 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "subsystem": "scheduler", 00:18:43.633 "config": [ 00:18:43.633 { 00:18:43.633 "method": "framework_set_scheduler", 00:18:43.633 "params": { 00:18:43.633 "name": "static" 00:18:43.633 } 00:18:43.633 } 00:18:43.633 ] 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "subsystem": "nvmf", 00:18:43.633 "config": [ 00:18:43.633 { 
00:18:43.633 "method": "nvmf_set_config", 00:18:43.633 "params": { 00:18:43.633 "discovery_filter": "match_any", 00:18:43.633 "admin_cmd_passthru": { 00:18:43.633 "identify_ctrlr": false 00:18:43.633 }, 00:18:43.633 "dhchap_digests": [ 00:18:43.633 "sha256", 00:18:43.633 "sha384", 00:18:43.633 "sha512" 00:18:43.633 ], 00:18:43.633 "dhchap_dhgroups": [ 00:18:43.633 "null", 00:18:43.633 "ffdhe2048", 00:18:43.633 "ffdhe3072", 00:18:43.633 "ffdhe4096", 00:18:43.633 "ffdhe6144", 00:18:43.633 "ffdhe8192" 00:18:43.633 ] 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "nvmf_set_max_subsystems", 00:18:43.633 "params": { 00:18:43.633 "max_subsystems": 1024 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "nvmf_set_crdt", 00:18:43.633 "params": { 00:18:43.633 "crdt1": 0, 00:18:43.633 "crdt2": 0, 00:18:43.633 "crdt3": 0 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "nvmf_create_transport", 00:18:43.633 "params": { 00:18:43.633 "trtype": "TCP", 00:18:43.633 "max_queue_depth": 128, 00:18:43.633 "max_io_qpairs_per_ctrlr": 127, 00:18:43.633 "in_capsule_data_size": 4096, 00:18:43.633 "max_io_size": 131072, 00:18:43.633 "io_unit_size": 131072, 00:18:43.633 "max_aq_depth": 128, 00:18:43.633 "num_shared_buffers": 511, 00:18:43.633 "buf_cache_size": 4294967295, 00:18:43.633 "dif_insert_or_strip": false, 00:18:43.633 "zcopy": false, 00:18:43.633 "c2h_success": false, 00:18:43.633 "sock_priority": 0, 00:18:43.633 "abort_timeout_sec": 1, 00:18:43.633 "ack_timeout": 0, 00:18:43.633 "data_wr_pool_size": 0 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "nvmf_create_subsystem", 00:18:43.633 "params": { 00:18:43.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.633 "allow_any_host": false, 00:18:43.633 "serial_number": "00000000000000000000", 00:18:43.633 "model_number": "SPDK bdev Controller", 00:18:43.633 "max_namespaces": 32, 00:18:43.633 "min_cntlid": 1, 00:18:43.633 "max_cntlid": 65519, 00:18:43.633 
"ana_reporting": false 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "nvmf_subsystem_add_host", 00:18:43.633 "params": { 00:18:43.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.633 "host": "nqn.2016-06.io.spdk:host1", 00:18:43.633 "psk": "key0" 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "nvmf_subsystem_add_ns", 00:18:43.633 "params": { 00:18:43.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.633 "namespace": { 00:18:43.633 "nsid": 1, 00:18:43.633 "bdev_name": "malloc0", 00:18:43.633 "nguid": "62F484F51EF044219269F5AA29528813", 00:18:43.633 "uuid": "62f484f5-1ef0-4421-9269-f5aa29528813", 00:18:43.633 "no_auto_visible": false 00:18:43.633 } 00:18:43.633 } 00:18:43.633 }, 00:18:43.633 { 00:18:43.633 "method": "nvmf_subsystem_add_listener", 00:18:43.633 "params": { 00:18:43.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.633 "listen_address": { 00:18:43.633 "trtype": "TCP", 00:18:43.633 "adrfam": "IPv4", 00:18:43.633 "traddr": "10.0.0.2", 00:18:43.633 "trsvcid": "4420" 00:18:43.633 }, 00:18:43.633 "secure_channel": false, 00:18:43.633 "sock_impl": "ssl" 00:18:43.633 } 00:18:43.633 } 00:18:43.633 ] 00:18:43.633 } 00:18:43.633 ] 00:18:43.633 }' 00:18:43.633 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:43.893 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:43.893 "subsystems": [ 00:18:43.893 { 00:18:43.893 "subsystem": "keyring", 00:18:43.893 "config": [ 00:18:43.893 { 00:18:43.893 "method": "keyring_file_add_key", 00:18:43.893 "params": { 00:18:43.893 "name": "key0", 00:18:43.893 "path": "/tmp/tmp.Brr5j5DZrC" 00:18:43.893 } 00:18:43.893 } 00:18:43.893 ] 00:18:43.893 }, 00:18:43.893 { 00:18:43.893 "subsystem": "iobuf", 00:18:43.893 "config": [ 00:18:43.893 { 00:18:43.893 "method": "iobuf_set_options", 00:18:43.893 "params": { 00:18:43.893 
"small_pool_count": 8192, 00:18:43.893 "large_pool_count": 1024, 00:18:43.893 "small_bufsize": 8192, 00:18:43.893 "large_bufsize": 135168, 00:18:43.893 "enable_numa": false 00:18:43.893 } 00:18:43.893 } 00:18:43.893 ] 00:18:43.893 }, 00:18:43.893 { 00:18:43.893 "subsystem": "sock", 00:18:43.893 "config": [ 00:18:43.893 { 00:18:43.893 "method": "sock_set_default_impl", 00:18:43.893 "params": { 00:18:43.893 "impl_name": "posix" 00:18:43.893 } 00:18:43.893 }, 00:18:43.893 { 00:18:43.893 "method": "sock_impl_set_options", 00:18:43.893 "params": { 00:18:43.893 "impl_name": "ssl", 00:18:43.893 "recv_buf_size": 4096, 00:18:43.893 "send_buf_size": 4096, 00:18:43.893 "enable_recv_pipe": true, 00:18:43.893 "enable_quickack": false, 00:18:43.893 "enable_placement_id": 0, 00:18:43.893 "enable_zerocopy_send_server": true, 00:18:43.893 "enable_zerocopy_send_client": false, 00:18:43.893 "zerocopy_threshold": 0, 00:18:43.894 "tls_version": 0, 00:18:43.894 "enable_ktls": false 00:18:43.894 } 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "method": "sock_impl_set_options", 00:18:43.894 "params": { 00:18:43.894 "impl_name": "posix", 00:18:43.894 "recv_buf_size": 2097152, 00:18:43.894 "send_buf_size": 2097152, 00:18:43.894 "enable_recv_pipe": true, 00:18:43.894 "enable_quickack": false, 00:18:43.894 "enable_placement_id": 0, 00:18:43.894 "enable_zerocopy_send_server": true, 00:18:43.894 "enable_zerocopy_send_client": false, 00:18:43.894 "zerocopy_threshold": 0, 00:18:43.894 "tls_version": 0, 00:18:43.894 "enable_ktls": false 00:18:43.894 } 00:18:43.894 } 00:18:43.894 ] 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "subsystem": "vmd", 00:18:43.894 "config": [] 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "subsystem": "accel", 00:18:43.894 "config": [ 00:18:43.894 { 00:18:43.894 "method": "accel_set_options", 00:18:43.894 "params": { 00:18:43.894 "small_cache_size": 128, 00:18:43.894 "large_cache_size": 16, 00:18:43.894 "task_count": 2048, 00:18:43.894 "sequence_count": 2048, 00:18:43.894 
"buf_count": 2048 00:18:43.894 } 00:18:43.894 } 00:18:43.894 ] 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "subsystem": "bdev", 00:18:43.894 "config": [ 00:18:43.894 { 00:18:43.894 "method": "bdev_set_options", 00:18:43.894 "params": { 00:18:43.894 "bdev_io_pool_size": 65535, 00:18:43.894 "bdev_io_cache_size": 256, 00:18:43.894 "bdev_auto_examine": true, 00:18:43.894 "iobuf_small_cache_size": 128, 00:18:43.894 "iobuf_large_cache_size": 16 00:18:43.894 } 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "method": "bdev_raid_set_options", 00:18:43.894 "params": { 00:18:43.894 "process_window_size_kb": 1024, 00:18:43.894 "process_max_bandwidth_mb_sec": 0 00:18:43.894 } 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "method": "bdev_iscsi_set_options", 00:18:43.894 "params": { 00:18:43.894 "timeout_sec": 30 00:18:43.894 } 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "method": "bdev_nvme_set_options", 00:18:43.894 "params": { 00:18:43.894 "action_on_timeout": "none", 00:18:43.894 "timeout_us": 0, 00:18:43.894 "timeout_admin_us": 0, 00:18:43.894 "keep_alive_timeout_ms": 10000, 00:18:43.894 "arbitration_burst": 0, 00:18:43.894 "low_priority_weight": 0, 00:18:43.894 "medium_priority_weight": 0, 00:18:43.894 "high_priority_weight": 0, 00:18:43.894 "nvme_adminq_poll_period_us": 10000, 00:18:43.894 "nvme_ioq_poll_period_us": 0, 00:18:43.894 "io_queue_requests": 512, 00:18:43.894 "delay_cmd_submit": true, 00:18:43.894 "transport_retry_count": 4, 00:18:43.894 "bdev_retry_count": 3, 00:18:43.894 "transport_ack_timeout": 0, 00:18:43.894 "ctrlr_loss_timeout_sec": 0, 00:18:43.894 "reconnect_delay_sec": 0, 00:18:43.894 "fast_io_fail_timeout_sec": 0, 00:18:43.894 "disable_auto_failback": false, 00:18:43.894 "generate_uuids": false, 00:18:43.894 "transport_tos": 0, 00:18:43.894 "nvme_error_stat": false, 00:18:43.894 "rdma_srq_size": 0, 00:18:43.894 "io_path_stat": false, 00:18:43.894 "allow_accel_sequence": false, 00:18:43.894 "rdma_max_cq_size": 0, 00:18:43.894 "rdma_cm_event_timeout_ms": 0, 
00:18:43.894 "dhchap_digests": [ 00:18:43.894 "sha256", 00:18:43.894 "sha384", 00:18:43.894 "sha512" 00:18:43.894 ], 00:18:43.894 "dhchap_dhgroups": [ 00:18:43.894 "null", 00:18:43.894 "ffdhe2048", 00:18:43.894 "ffdhe3072", 00:18:43.894 "ffdhe4096", 00:18:43.894 "ffdhe6144", 00:18:43.894 "ffdhe8192" 00:18:43.894 ] 00:18:43.894 } 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "method": "bdev_nvme_attach_controller", 00:18:43.894 "params": { 00:18:43.894 "name": "nvme0", 00:18:43.894 "trtype": "TCP", 00:18:43.894 "adrfam": "IPv4", 00:18:43.894 "traddr": "10.0.0.2", 00:18:43.894 "trsvcid": "4420", 00:18:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.894 "prchk_reftag": false, 00:18:43.894 "prchk_guard": false, 00:18:43.894 "ctrlr_loss_timeout_sec": 0, 00:18:43.894 "reconnect_delay_sec": 0, 00:18:43.894 "fast_io_fail_timeout_sec": 0, 00:18:43.894 "psk": "key0", 00:18:43.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.894 "hdgst": false, 00:18:43.894 "ddgst": false, 00:18:43.894 "multipath": "multipath" 00:18:43.894 } 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "method": "bdev_nvme_set_hotplug", 00:18:43.894 "params": { 00:18:43.894 "period_us": 100000, 00:18:43.894 "enable": false 00:18:43.894 } 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "method": "bdev_enable_histogram", 00:18:43.894 "params": { 00:18:43.894 "name": "nvme0n1", 00:18:43.894 "enable": true 00:18:43.894 } 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "method": "bdev_wait_for_examine" 00:18:43.894 } 00:18:43.894 ] 00:18:43.894 }, 00:18:43.894 { 00:18:43.894 "subsystem": "nbd", 00:18:43.894 "config": [] 00:18:43.894 } 00:18:43.894 ] 00:18:43.894 }' 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2347191 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2347191 ']' 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2347191 00:18:43.894 13:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2347191 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2347191' 00:18:43.894 killing process with pid 2347191 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2347191 00:18:43.894 Received shutdown signal, test time was about 1.000000 seconds 00:18:43.894 00:18:43.894 Latency(us) 00:18:43.894 [2024-11-18T12:01:41.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.894 [2024-11-18T12:01:41.596Z] =================================================================================================================== 00:18:43.894 [2024-11-18T12:01:41.596Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2347191 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2347161 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2347161 ']' 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2347161 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:43.894 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:43.894 
13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2347161 00:18:44.154 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:44.154 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:44.154 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2347161' 00:18:44.154 killing process with pid 2347161 00:18:44.154 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2347161 00:18:44.154 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2347161 00:18:44.154 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:44.154 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.155 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.155 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:44.155 "subsystems": [ 00:18:44.155 { 00:18:44.155 "subsystem": "keyring", 00:18:44.155 "config": [ 00:18:44.155 { 00:18:44.155 "method": "keyring_file_add_key", 00:18:44.155 "params": { 00:18:44.155 "name": "key0", 00:18:44.155 "path": "/tmp/tmp.Brr5j5DZrC" 00:18:44.155 } 00:18:44.155 } 00:18:44.155 ] 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "subsystem": "iobuf", 00:18:44.155 "config": [ 00:18:44.155 { 00:18:44.155 "method": "iobuf_set_options", 00:18:44.155 "params": { 00:18:44.155 "small_pool_count": 8192, 00:18:44.155 "large_pool_count": 1024, 00:18:44.155 "small_bufsize": 8192, 00:18:44.155 "large_bufsize": 135168, 00:18:44.155 "enable_numa": false 00:18:44.155 } 00:18:44.155 } 00:18:44.155 ] 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "subsystem": "sock", 00:18:44.155 "config": [ 
00:18:44.155 { 00:18:44.155 "method": "sock_set_default_impl", 00:18:44.155 "params": { 00:18:44.155 "impl_name": "posix" 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "sock_impl_set_options", 00:18:44.155 "params": { 00:18:44.155 "impl_name": "ssl", 00:18:44.155 "recv_buf_size": 4096, 00:18:44.155 "send_buf_size": 4096, 00:18:44.155 "enable_recv_pipe": true, 00:18:44.155 "enable_quickack": false, 00:18:44.155 "enable_placement_id": 0, 00:18:44.155 "enable_zerocopy_send_server": true, 00:18:44.155 "enable_zerocopy_send_client": false, 00:18:44.155 "zerocopy_threshold": 0, 00:18:44.155 "tls_version": 0, 00:18:44.155 "enable_ktls": false 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "sock_impl_set_options", 00:18:44.155 "params": { 00:18:44.155 "impl_name": "posix", 00:18:44.155 "recv_buf_size": 2097152, 00:18:44.155 "send_buf_size": 2097152, 00:18:44.155 "enable_recv_pipe": true, 00:18:44.155 "enable_quickack": false, 00:18:44.155 "enable_placement_id": 0, 00:18:44.155 "enable_zerocopy_send_server": true, 00:18:44.155 "enable_zerocopy_send_client": false, 00:18:44.155 "zerocopy_threshold": 0, 00:18:44.155 "tls_version": 0, 00:18:44.155 "enable_ktls": false 00:18:44.155 } 00:18:44.155 } 00:18:44.155 ] 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "subsystem": "vmd", 00:18:44.155 "config": [] 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "subsystem": "accel", 00:18:44.155 "config": [ 00:18:44.155 { 00:18:44.155 "method": "accel_set_options", 00:18:44.155 "params": { 00:18:44.155 "small_cache_size": 128, 00:18:44.155 "large_cache_size": 16, 00:18:44.155 "task_count": 2048, 00:18:44.155 "sequence_count": 2048, 00:18:44.155 "buf_count": 2048 00:18:44.155 } 00:18:44.155 } 00:18:44.155 ] 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "subsystem": "bdev", 00:18:44.155 "config": [ 00:18:44.155 { 00:18:44.155 "method": "bdev_set_options", 00:18:44.155 "params": { 00:18:44.155 "bdev_io_pool_size": 65535, 00:18:44.155 "bdev_io_cache_size": 
256, 00:18:44.155 "bdev_auto_examine": true, 00:18:44.155 "iobuf_small_cache_size": 128, 00:18:44.155 "iobuf_large_cache_size": 16 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "bdev_raid_set_options", 00:18:44.155 "params": { 00:18:44.155 "process_window_size_kb": 1024, 00:18:44.155 "process_max_bandwidth_mb_sec": 0 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "bdev_iscsi_set_options", 00:18:44.155 "params": { 00:18:44.155 "timeout_sec": 30 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "bdev_nvme_set_options", 00:18:44.155 "params": { 00:18:44.155 "action_on_timeout": "none", 00:18:44.155 "timeout_us": 0, 00:18:44.155 "timeout_admin_us": 0, 00:18:44.155 "keep_alive_timeout_ms": 10000, 00:18:44.155 "arbitration_burst": 0, 00:18:44.155 "low_priority_weight": 0, 00:18:44.155 "medium_priority_weight": 0, 00:18:44.155 "high_priority_weight": 0, 00:18:44.155 "nvme_adminq_poll_period_us": 10000, 00:18:44.155 "nvme_ioq_poll_period_us": 0, 00:18:44.155 "io_queue_requests": 0, 00:18:44.155 "delay_cmd_submit": true, 00:18:44.155 "transport_retry_count": 4, 00:18:44.155 "bdev_retry_count": 3, 00:18:44.155 "transport_ack_timeout": 0, 00:18:44.155 "ctrlr_loss_timeout_sec": 0, 00:18:44.155 "reconnect_delay_sec": 0, 00:18:44.155 "fast_io_fail_timeout_sec": 0, 00:18:44.155 "disable_auto_failback": false, 00:18:44.155 "generate_uuids": false, 00:18:44.155 "transport_tos": 0, 00:18:44.155 "nvme_error_stat": false, 00:18:44.155 "rdma_srq_size": 0, 00:18:44.155 "io_path_stat": false, 00:18:44.155 "allow_accel_sequence": false, 00:18:44.155 "rdma_max_cq_size": 0, 00:18:44.155 "rdma_cm_event_timeout_ms": 0, 00:18:44.155 "dhchap_digests": [ 00:18:44.155 "sha256", 00:18:44.155 "sha384", 00:18:44.155 "sha512" 00:18:44.155 ], 00:18:44.155 "dhchap_dhgroups": [ 00:18:44.155 "null", 00:18:44.155 "ffdhe2048", 00:18:44.155 "ffdhe3072", 00:18:44.155 "ffdhe4096", 00:18:44.155 "ffdhe6144", 00:18:44.155 "ffdhe8192" 00:18:44.155 ] 
00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "bdev_nvme_set_hotplug", 00:18:44.155 "params": { 00:18:44.155 "period_us": 100000, 00:18:44.155 "enable": false 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "bdev_malloc_create", 00:18:44.155 "params": { 00:18:44.155 "name": "malloc0", 00:18:44.155 "num_blocks": 8192, 00:18:44.155 "block_size": 4096, 00:18:44.155 "physical_block_size": 4096, 00:18:44.155 "uuid": "62f484f5-1ef0-4421-9269-f5aa29528813", 00:18:44.155 "optimal_io_boundary": 0, 00:18:44.155 "md_size": 0, 00:18:44.155 "dif_type": 0, 00:18:44.155 "dif_is_head_of_md": false, 00:18:44.155 "dif_pi_format": 0 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "bdev_wait_for_examine" 00:18:44.155 } 00:18:44.155 ] 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "subsystem": "nbd", 00:18:44.155 "config": [] 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "subsystem": "scheduler", 00:18:44.155 "config": [ 00:18:44.155 { 00:18:44.155 "method": "framework_set_scheduler", 00:18:44.155 "params": { 00:18:44.155 "name": "static" 00:18:44.155 } 00:18:44.155 } 00:18:44.155 ] 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "subsystem": "nvmf", 00:18:44.155 "config": [ 00:18:44.155 { 00:18:44.155 "method": "nvmf_set_config", 00:18:44.155 "params": { 00:18:44.155 "discovery_filter": "match_any", 00:18:44.155 "admin_cmd_passthru": { 00:18:44.155 "identify_ctrlr": false 00:18:44.155 }, 00:18:44.155 "dhchap_digests": [ 00:18:44.155 "sha256", 00:18:44.155 "sha384", 00:18:44.155 "sha512" 00:18:44.155 ], 00:18:44.155 "dhchap_dhgroups": [ 00:18:44.155 "null", 00:18:44.155 "ffdhe2048", 00:18:44.155 "ffdhe3072", 00:18:44.155 "ffdhe4096", 00:18:44.155 "ffdhe6144", 00:18:44.155 "ffdhe8192" 00:18:44.155 ] 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "nvmf_set_max_subsystems", 00:18:44.155 "params": { 00:18:44.155 "max_subsystems": 1024 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": 
"nvmf_set_crdt", 00:18:44.155 "params": { 00:18:44.155 "crdt1": 0, 00:18:44.155 "crdt2": 0, 00:18:44.155 "crdt3": 0 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "nvmf_create_transport", 00:18:44.155 "params": { 00:18:44.155 "trtype": "TCP", 00:18:44.155 "max_queue_depth": 128, 00:18:44.155 "max_io_qpairs_per_ctrlr": 127, 00:18:44.155 "in_capsule_data_size": 4096, 00:18:44.155 "max_io_size": 131072, 00:18:44.155 "io_unit_size": 131072, 00:18:44.155 "max_aq_depth": 128, 00:18:44.155 "num_shared_buffers": 511, 00:18:44.155 "buf_cache_size": 4294967295, 00:18:44.155 "dif_insert_or_strip": false, 00:18:44.155 "zcopy": false, 00:18:44.155 "c2h_success": false, 00:18:44.155 "sock_priority": 0, 00:18:44.155 "abort_timeout_sec": 1, 00:18:44.155 "ack_timeout": 0, 00:18:44.155 "data_wr_pool_size": 0 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "nvmf_create_subsystem", 00:18:44.155 "params": { 00:18:44.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.155 "allow_any_host": false, 00:18:44.155 "serial_number": "00000000000000000000", 00:18:44.155 "model_number": "SPDK bdev Controller", 00:18:44.155 "max_namespaces": 32, 00:18:44.155 "min_cntlid": 1, 00:18:44.155 "max_cntlid": 65519, 00:18:44.155 "ana_reporting": false 00:18:44.155 } 00:18:44.155 }, 00:18:44.155 { 00:18:44.155 "method": "nvmf_subsystem_add_host", 00:18:44.156 "params": { 00:18:44.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.156 "host": "nqn.2016-06.io.spdk:host1", 00:18:44.156 "psk": "key0" 00:18:44.156 } 00:18:44.156 }, 00:18:44.156 { 00:18:44.156 "method": "nvmf_subsystem_add_ns", 00:18:44.156 "params": { 00:18:44.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.156 "namespace": { 00:18:44.156 "nsid": 1, 00:18:44.156 "bdev_name": "malloc0", 00:18:44.156 "nguid": "62F484F51EF044219269F5AA29528813", 00:18:44.156 "uuid": "62f484f5-1ef0-4421-9269-f5aa29528813", 00:18:44.156 "no_auto_visible": false 00:18:44.156 } 00:18:44.156 } 00:18:44.156 }, 00:18:44.156 { 
00:18:44.156 "method": "nvmf_subsystem_add_listener", 00:18:44.156 "params": { 00:18:44.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.156 "listen_address": { 00:18:44.156 "trtype": "TCP", 00:18:44.156 "adrfam": "IPv4", 00:18:44.156 "traddr": "10.0.0.2", 00:18:44.156 "trsvcid": "4420" 00:18:44.156 }, 00:18:44.156 "secure_channel": false, 00:18:44.156 "sock_impl": "ssl" 00:18:44.156 } 00:18:44.156 } 00:18:44.156 ] 00:18:44.156 } 00:18:44.156 ] 00:18:44.156 }' 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2347659 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2347659 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2347659 ']' 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:44.156 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.156 [2024-11-18 13:01:41.838306] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:44.156 [2024-11-18 13:01:41.838350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.416 [2024-11-18 13:01:41.913849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.416 [2024-11-18 13:01:41.954841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.416 [2024-11-18 13:01:41.954877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.416 [2024-11-18 13:01:41.954884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.416 [2024-11-18 13:01:41.954890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.416 [2024-11-18 13:01:41.954899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:44.416 [2024-11-18 13:01:41.955512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.676 [2024-11-18 13:01:42.168920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.676 [2024-11-18 13:01:42.200950] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:44.676 [2024-11-18 13:01:42.201166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2347904 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2347904 /var/tmp/bdevperf.sock 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2347904 ']' 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.246 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:45.246 "subsystems": [ 00:18:45.246 { 00:18:45.246 "subsystem": "keyring", 00:18:45.246 "config": [ 00:18:45.246 { 00:18:45.246 "method": "keyring_file_add_key", 00:18:45.246 "params": { 00:18:45.246 "name": "key0", 00:18:45.246 "path": "/tmp/tmp.Brr5j5DZrC" 00:18:45.246 } 00:18:45.246 } 00:18:45.246 ] 00:18:45.246 }, 00:18:45.246 { 00:18:45.246 "subsystem": "iobuf", 00:18:45.246 "config": [ 00:18:45.246 { 00:18:45.246 "method": "iobuf_set_options", 00:18:45.246 "params": { 00:18:45.246 "small_pool_count": 8192, 00:18:45.246 "large_pool_count": 1024, 00:18:45.246 "small_bufsize": 8192, 00:18:45.246 "large_bufsize": 135168, 00:18:45.246 "enable_numa": false 00:18:45.246 } 00:18:45.246 } 00:18:45.246 ] 00:18:45.246 }, 00:18:45.246 { 00:18:45.246 "subsystem": "sock", 00:18:45.246 "config": [ 00:18:45.246 { 00:18:45.246 "method": "sock_set_default_impl", 00:18:45.246 "params": { 00:18:45.246 "impl_name": "posix" 00:18:45.246 } 00:18:45.246 }, 00:18:45.246 { 00:18:45.246 "method": "sock_impl_set_options", 00:18:45.246 "params": { 00:18:45.246 "impl_name": "ssl", 00:18:45.246 "recv_buf_size": 4096, 00:18:45.246 "send_buf_size": 4096, 00:18:45.246 "enable_recv_pipe": true, 00:18:45.246 "enable_quickack": false, 00:18:45.246 "enable_placement_id": 0, 00:18:45.246 "enable_zerocopy_send_server": true, 00:18:45.246 "enable_zerocopy_send_client": false, 00:18:45.246 "zerocopy_threshold": 0, 00:18:45.246 "tls_version": 0, 00:18:45.246 "enable_ktls": false 00:18:45.246 } 00:18:45.246 }, 00:18:45.246 { 00:18:45.246 "method": "sock_impl_set_options", 00:18:45.246 "params": { 
00:18:45.246 "impl_name": "posix", 00:18:45.246 "recv_buf_size": 2097152, 00:18:45.246 "send_buf_size": 2097152, 00:18:45.246 "enable_recv_pipe": true, 00:18:45.246 "enable_quickack": false, 00:18:45.246 "enable_placement_id": 0, 00:18:45.246 "enable_zerocopy_send_server": true, 00:18:45.246 "enable_zerocopy_send_client": false, 00:18:45.246 "zerocopy_threshold": 0, 00:18:45.246 "tls_version": 0, 00:18:45.246 "enable_ktls": false 00:18:45.246 } 00:18:45.246 } 00:18:45.246 ] 00:18:45.246 }, 00:18:45.246 { 00:18:45.246 "subsystem": "vmd", 00:18:45.246 "config": [] 00:18:45.246 }, 00:18:45.246 { 00:18:45.246 "subsystem": "accel", 00:18:45.246 "config": [ 00:18:45.246 { 00:18:45.246 "method": "accel_set_options", 00:18:45.246 "params": { 00:18:45.246 "small_cache_size": 128, 00:18:45.246 "large_cache_size": 16, 00:18:45.246 "task_count": 2048, 00:18:45.246 "sequence_count": 2048, 00:18:45.246 "buf_count": 2048 00:18:45.246 } 00:18:45.246 } 00:18:45.246 ] 00:18:45.246 }, 00:18:45.246 { 00:18:45.246 "subsystem": "bdev", 00:18:45.246 "config": [ 00:18:45.246 { 00:18:45.246 "method": "bdev_set_options", 00:18:45.246 "params": { 00:18:45.246 "bdev_io_pool_size": 65535, 00:18:45.246 "bdev_io_cache_size": 256, 00:18:45.246 "bdev_auto_examine": true, 00:18:45.246 "iobuf_small_cache_size": 128, 00:18:45.246 "iobuf_large_cache_size": 16 00:18:45.246 } 00:18:45.246 }, 00:18:45.246 { 00:18:45.246 "method": "bdev_raid_set_options", 00:18:45.246 "params": { 00:18:45.246 "process_window_size_kb": 1024, 00:18:45.246 "process_max_bandwidth_mb_sec": 0 00:18:45.246 } 00:18:45.246 }, 00:18:45.246 { 00:18:45.247 "method": "bdev_iscsi_set_options", 00:18:45.247 "params": { 00:18:45.247 "timeout_sec": 30 00:18:45.247 } 00:18:45.247 }, 00:18:45.247 { 00:18:45.247 "method": "bdev_nvme_set_options", 00:18:45.247 "params": { 00:18:45.247 "action_on_timeout": "none", 00:18:45.247 "timeout_us": 0, 00:18:45.247 "timeout_admin_us": 0, 00:18:45.247 "keep_alive_timeout_ms": 10000, 00:18:45.247 
"arbitration_burst": 0, 00:18:45.247 "low_priority_weight": 0, 00:18:45.247 "medium_priority_weight": 0, 00:18:45.247 "high_priority_weight": 0, 00:18:45.247 "nvme_adminq_poll_period_us": 10000, 00:18:45.247 "nvme_ioq_poll_period_us": 0, 00:18:45.247 "io_queue_requests": 512, 00:18:45.247 "delay_cmd_submit": true, 00:18:45.247 "transport_retry_count": 4, 00:18:45.247 "bdev_retry_count": 3, 00:18:45.247 "transport_ack_timeout": 0, 00:18:45.247 "ctrlr_loss_timeout_sec": 0, 00:18:45.247 "reconnect_delay_sec": 0, 00:18:45.247 "fast_io_fail_timeout_sec": 0, 00:18:45.247 "disable_auto_failback": false, 00:18:45.247 "generate_uuids": false, 00:18:45.247 "transport_tos": 0, 00:18:45.247 "nvme_error_stat": false, 00:18:45.247 "rdma_srq_size": 0, 00:18:45.247 "io_path_stat": false, 00:18:45.247 "allow_accel_sequence": false, 00:18:45.247 "rdma_max_cq_size": 0, 00:18:45.247 "rdma_cm_event_timeout_ms": 0, 00:18:45.247 "dhchap_digests": [ 00:18:45.247 "sha256", 00:18:45.247 "sha384", 00:18:45.247 "sha512" 00:18:45.247 ], 00:18:45.247 "dhchap_dhgroups": [ 00:18:45.247 "null", 00:18:45.247 "ffdhe2048", 00:18:45.247 "ffdhe3072", 00:18:45.247 "ffdhe4096", 00:18:45.247 "ffdhe6144", 00:18:45.247 "ffdhe8192" 00:18:45.247 ] 00:18:45.247 } 00:18:45.247 }, 00:18:45.247 { 00:18:45.247 "method": "bdev_nvme_attach_controller", 00:18:45.247 "params": { 00:18:45.247 "name": "nvme0", 00:18:45.247 "trtype": "TCP", 00:18:45.247 "adrfam": "IPv4", 00:18:45.247 "traddr": "10.0.0.2", 00:18:45.247 "trsvcid": "4420", 00:18:45.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.247 "prchk_reftag": false, 00:18:45.247 "prchk_guard": false, 00:18:45.247 "ctrlr_loss_timeout_sec": 0, 00:18:45.247 "reconnect_delay_sec": 0, 00:18:45.247 "fast_io_fail_timeout_sec": 0, 00:18:45.247 "psk": "key0", 00:18:45.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.247 "hdgst": false, 00:18:45.247 "ddgst": false, 00:18:45.247 "multipath": "multipath" 00:18:45.247 } 00:18:45.247 }, 00:18:45.247 { 00:18:45.247 
"method": "bdev_nvme_set_hotplug", 00:18:45.247 "params": { 00:18:45.247 "period_us": 100000, 00:18:45.247 "enable": false 00:18:45.247 } 00:18:45.247 }, 00:18:45.247 { 00:18:45.247 "method": "bdev_enable_histogram", 00:18:45.247 "params": { 00:18:45.247 "name": "nvme0n1", 00:18:45.247 "enable": true 00:18:45.247 } 00:18:45.247 }, 00:18:45.247 { 00:18:45.247 "method": "bdev_wait_for_examine" 00:18:45.247 } 00:18:45.247 ] 00:18:45.247 }, 00:18:45.247 { 00:18:45.247 "subsystem": "nbd", 00:18:45.247 "config": [] 00:18:45.247 } 00:18:45.247 ] 00:18:45.247 }' 00:18:45.247 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:45.247 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.247 [2024-11-18 13:01:42.763237] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:45.247 [2024-11-18 13:01:42.763284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347904 ] 00:18:45.247 [2024-11-18 13:01:42.838028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.247 [2024-11-18 13:01:42.880198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.506 [2024-11-18 13:01:43.032547] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.076 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:46.076 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:46.076 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:46.076 13:01:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:46.335 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.335 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:46.335 Running I/O for 1 seconds... 00:18:47.274 5248.00 IOPS, 20.50 MiB/s 00:18:47.274 Latency(us) 00:18:47.274 [2024-11-18T12:01:44.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.274 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:47.274 Verification LBA range: start 0x0 length 0x2000 00:18:47.274 nvme0n1 : 1.01 5306.98 20.73 0.00 0.00 23955.77 5442.34 40347.38 00:18:47.274 [2024-11-18T12:01:44.976Z] =================================================================================================================== 00:18:47.274 [2024-11-18T12:01:44.976Z] Total : 5306.98 20.73 0.00 0.00 23955.77 5442.34 40347.38 00:18:47.274 { 00:18:47.274 "results": [ 00:18:47.274 { 00:18:47.274 "job": "nvme0n1", 00:18:47.274 "core_mask": "0x2", 00:18:47.274 "workload": "verify", 00:18:47.274 "status": "finished", 00:18:47.274 "verify_range": { 00:18:47.274 "start": 0, 00:18:47.274 "length": 8192 00:18:47.274 }, 00:18:47.274 "queue_depth": 128, 00:18:47.274 "io_size": 4096, 00:18:47.274 "runtime": 1.013006, 00:18:47.274 "iops": 5306.977451268798, 00:18:47.274 "mibps": 20.730380669018743, 00:18:47.274 "io_failed": 0, 00:18:47.274 "io_timeout": 0, 00:18:47.274 "avg_latency_us": 23955.76712215321, 00:18:47.274 "min_latency_us": 5442.337391304348, 00:18:47.274 "max_latency_us": 40347.38086956522 00:18:47.274 } 00:18:47.274 ], 00:18:47.274 "core_count": 1 00:18:47.274 } 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:47.274 13:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:18:47.274 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:47.274 nvmf_trace.0 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2347904 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2347904 ']' 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2347904 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 2347904 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2347904' 00:18:47.533 killing process with pid 2347904 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2347904 00:18:47.533 Received shutdown signal, test time was about 1.000000 seconds 00:18:47.533 00:18:47.533 Latency(us) 00:18:47.533 [2024-11-18T12:01:45.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.533 [2024-11-18T12:01:45.235Z] =================================================================================================================== 00:18:47.533 [2024-11-18T12:01:45.235Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.533 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2347904 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.793 rmmod nvme_tcp 00:18:47.793 rmmod nvme_fabrics 00:18:47.793 rmmod nvme_keyring 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2347659 ']' 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2347659 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2347659 ']' 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2347659 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2347659 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2347659' 00:18:47.793 killing process with pid 2347659 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2347659 00:18:47.793 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2347659 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.053 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.961 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:49.961 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.OgqzLQiEoV /tmp/tmp.gtTo1Oxmtq /tmp/tmp.Brr5j5DZrC 00:18:49.961 00:18:49.961 real 1m19.701s 00:18:49.961 user 2m2.711s 00:18:49.961 sys 0m30.045s 00:18:49.961 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:49.961 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.961 ************************************ 00:18:49.961 END TEST nvmf_tls 00:18:49.961 ************************************ 00:18:49.961 13:01:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:49.961 13:01:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:49.961 13:01:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:18:49.961 13:01:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:50.222 ************************************ 00:18:50.222 START TEST nvmf_fips 00:18:50.222 ************************************ 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:50.222 * Looking for test storage... 00:18:50.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.222 
13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:50.222 13:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.222 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.222 --rc genhtml_branch_coverage=1 00:18:50.222 --rc genhtml_function_coverage=1 00:18:50.222 --rc genhtml_legend=1 00:18:50.223 --rc geninfo_all_blocks=1 00:18:50.223 --rc geninfo_unexecuted_blocks=1 00:18:50.223 00:18:50.223 ' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:50.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.223 --rc genhtml_branch_coverage=1 00:18:50.223 --rc genhtml_function_coverage=1 00:18:50.223 --rc genhtml_legend=1 00:18:50.223 --rc geninfo_all_blocks=1 00:18:50.223 --rc geninfo_unexecuted_blocks=1 00:18:50.223 00:18:50.223 ' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:50.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.223 --rc genhtml_branch_coverage=1 00:18:50.223 --rc genhtml_function_coverage=1 00:18:50.223 --rc genhtml_legend=1 00:18:50.223 --rc geninfo_all_blocks=1 00:18:50.223 --rc geninfo_unexecuted_blocks=1 00:18:50.223 00:18:50.223 ' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:50.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.223 --rc genhtml_branch_coverage=1 00:18:50.223 --rc genhtml_function_coverage=1 00:18:50.223 --rc genhtml_legend=1 00:18:50.223 --rc geninfo_all_blocks=1 00:18:50.223 --rc geninfo_unexecuted_blocks=1 00:18:50.223 00:18:50.223 ' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
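The `cmp_versions`/`lt` trace above (splitting `1.15` and `2` on `.-:` and walking the fields) follows a common pure-bash pattern. A minimal standalone sketch of that field-by-field comparison, assuming numeric fields only — `ver_lt`/`ver_ge` are hypothetical names, not the actual `scripts/common.sh` helpers:

```shell
# Field-by-field version comparison in the spirit of the cmp_versions
# trace above. Assumes purely numeric fields; ver_lt/ver_ge are
# hypothetical names, not the real scripts/common.sh functions.
ver_lt() {  # returns 0 if $1 < $2
    local -a v1 v2
    local i len
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # A missing field compares as 0, so "1.15" vs "2" acts like 1.15 vs 2.0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal is not "less than"
}

ver_ge() { ! ver_lt "$1" "$2"; }
```

This mirrors why the trace reports `lt 1.15 2` as true for the lcov gate and `ge 3.1.1 3.0.0` as true for the OpenSSL gate later on.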
00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.223 13:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.223 13:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
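The `paths/export.sh` entries above show the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories being prepended repeatedly, so the exported `PATH` carries many duplicate entries. A small first-occurrence dedup pass is one way to collapse such a string — `dedup_path` is a hypothetical helper, not part of the traced script:

```shell
# Collapse duplicate entries in a PATH-like string, keeping the first
# occurrence of each directory. dedup_path is a hypothetical helper and
# not something the traced paths/export.sh actually does.
dedup_path() {
    local out= seen=: dir
    local IFS=:
    for dir in $1; do
        case "$seen" in
            *":$dir:"*) continue ;;   # already kept this directory
        esac
        seen+="$dir:"
        out+="${out:+:}$dir"
    done
    printf '%s\n' "$out"
}
```

Keeping the first occurrence preserves lookup precedence, which is what matters when earlier entries are meant to shadow later ones.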
00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:50.223 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.224 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:50.484 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:50.484 Error setting digest 00:18:50.484 4082CCC2217F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:50.484 4082CCC2217F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:50.484 13:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:50.484 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
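Stepping back to the FIPS gate traced a few entries earlier: the script captured the `name:` lines of `openssl list -providers` and required exactly two providers, one `base` and one `fips` (here satisfied by the Red Hat FIPS provider). A detached sketch of that check, operating on already-captured listing text instead of invoking openssl — `check_providers` is hypothetical, the real logic lives in `test/nvmf/fips/fips.sh`:

```shell
# Sketch of the provider gate from the fips.sh@117-121 trace: keep the
# "name:" lines of an `openssl list -providers` listing and require a
# base provider followed by a FIPS provider. check_providers is a
# hypothetical name; the listing is passed in so the sketch is testable.
check_providers() {
    local listing=$1
    local -a names
    mapfile -t names < <(grep -i '^[[:space:]]*name:' <<< "$listing")
    (( ${#names[@]} == 2 )) || return 1        # expect exactly two providers
    [[ ${names[0],,} == *base* ]] || return 1  # first must be the base provider
    [[ ${names[1],,} == *fips* ]] || return 1  # second must be the FIPS provider
    return 0
}
```

The trace also shows why the gate matters: once `OPENSSL_CONF` points at the FIPS config, non-approved digests fail, which is exactly the `openssl md5` "unsupported" error recorded below.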
00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:57.077 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:57.077 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:57.077 Found net devices under 0000:86:00.0: cvl_0_0 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
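The device discovery traced above maps each matched PCI function to its kernel interfaces by globbing `/sys/bus/pci/devices/<bdf>/net/*` and stripping to basenames (yielding `cvl_0_0` and `cvl_0_1` here). A sketch of that mapping with the sysfs root as a parameter, so it can run against a fake tree — the real script reads `/sys` directly:

```shell
# Sketch of the PCI-to-netdev mapping from the nvmf/common.sh@411-428
# trace: a network PCI function exposes its interfaces under
# <sysfs>/devices/<bdf>/net/. The sysfs root is a parameter here purely
# so the sketch is testable against a fake tree.
net_devs_for_pci() {
    local sysfs_root=$1 pci=$2
    local -a devs=("$sysfs_root/devices/$pci/net/"*)
    # An unmatched glob stays literal; treat that as "no net devices".
    [[ -e ${devs[0]} ]] || return 0
    printf '%s\n' "${devs[@]##*/}"   # strip to interface names, as in @427
}
```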
00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:57.077 Found net devices under 0000:86:00.1: cvl_0_1 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:57.077 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.078 13:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:57.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:18:57.078 00:18:57.078 --- 10.0.0.2 ping statistics --- 00:18:57.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.078 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:18:57.078 00:18:57.078 --- 10.0.0.1 ping statistics --- 00:18:57.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.078 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:57.078 13:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2351925 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2351925 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2351925 ']' 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.078 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.078 [2024-11-18 13:01:54.061081] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:57.078 [2024-11-18 13:01:54.061131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.078 [2024-11-18 13:01:54.138239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.078 [2024-11-18 13:01:54.179869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.078 [2024-11-18 13:01:54.179907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.078 [2024-11-18 13:01:54.179915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.078 [2024-11-18 13:01:54.179923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.078 [2024-11-18 13:01:54.179928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:57.078 [2024-11-18 13:01:54.180500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.bGC 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.bGC 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.bGC 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.bGC 00:18:57.337 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.595 [2024-11-18 13:01:55.104703] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.595 [2024-11-18 13:01:55.120713] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.595 [2024-11-18 13:01:55.120903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.595 malloc0 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2352074 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2352074 /var/tmp/bdevperf.sock 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2352074 ']' 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.596 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.596 [2024-11-18 13:01:55.250818] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:18:57.596 [2024-11-18 13:01:55.250869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352074 ] 00:18:57.855 [2024-11-18 13:01:55.326263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.855 [2024-11-18 13:01:55.366668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.422 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.422 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:18:58.422 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.bGC 00:18:58.681 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:58.939 [2024-11-18 13:01:56.435610] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.939 TLSTESTn1 00:18:58.939 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:58.939 Running I/O for 10 seconds... 
00:19:01.252 5092.00 IOPS, 19.89 MiB/s [2024-11-18T12:01:59.890Z] 5167.00 IOPS, 20.18 MiB/s [2024-11-18T12:02:00.826Z] 5012.00 IOPS, 19.58 MiB/s [2024-11-18T12:02:01.764Z] 4891.25 IOPS, 19.11 MiB/s [2024-11-18T12:02:02.699Z] 4814.00 IOPS, 18.80 MiB/s [2024-11-18T12:02:04.078Z] 4749.17 IOPS, 18.55 MiB/s [2024-11-18T12:02:05.015Z] 4709.71 IOPS, 18.40 MiB/s [2024-11-18T12:02:05.951Z] 4679.50 IOPS, 18.28 MiB/s [2024-11-18T12:02:06.887Z] 4658.56 IOPS, 18.20 MiB/s [2024-11-18T12:02:06.887Z] 4640.70 IOPS, 18.13 MiB/s 00:19:09.185 Latency(us) 00:19:09.185 [2024-11-18T12:02:06.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.185 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:09.185 Verification LBA range: start 0x0 length 0x2000 00:19:09.185 TLSTESTn1 : 10.04 4633.91 18.10 0.00 0.00 27559.61 6496.61 49465.43 00:19:09.185 [2024-11-18T12:02:06.887Z] =================================================================================================================== 00:19:09.185 [2024-11-18T12:02:06.887Z] Total : 4633.91 18.10 0.00 0.00 27559.61 6496.61 49465.43 00:19:09.185 { 00:19:09.185 "results": [ 00:19:09.185 { 00:19:09.185 "job": "TLSTESTn1", 00:19:09.185 "core_mask": "0x4", 00:19:09.185 "workload": "verify", 00:19:09.185 "status": "finished", 00:19:09.185 "verify_range": { 00:19:09.185 "start": 0, 00:19:09.185 "length": 8192 00:19:09.185 }, 00:19:09.185 "queue_depth": 128, 00:19:09.185 "io_size": 4096, 00:19:09.185 "runtime": 10.042276, 00:19:09.185 "iops": 4633.909683422364, 00:19:09.185 "mibps": 18.10120970086861, 00:19:09.185 "io_failed": 0, 00:19:09.185 "io_timeout": 0, 00:19:09.185 "avg_latency_us": 27559.61089132537, 00:19:09.185 "min_latency_us": 6496.612173913043, 00:19:09.185 "max_latency_us": 49465.43304347826 00:19:09.185 } 00:19:09.185 ], 00:19:09.185 "core_count": 1 00:19:09.185 } 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:09.185 
13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:09.185 nvmf_trace.0 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2352074 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2352074 ']' 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2352074 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2352074 00:19:09.185 13:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2352074' 00:19:09.185 killing process with pid 2352074 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2352074 00:19:09.185 Received shutdown signal, test time was about 10.000000 seconds 00:19:09.185 00:19:09.185 Latency(us) 00:19:09.185 [2024-11-18T12:02:06.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.185 [2024-11-18T12:02:06.887Z] =================================================================================================================== 00:19:09.185 [2024-11-18T12:02:06.887Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.185 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2352074 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.444 rmmod nvme_tcp 00:19:09.444 rmmod nvme_fabrics 00:19:09.444 rmmod nvme_keyring 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2351925 ']' 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2351925 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2351925 ']' 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2351925 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2351925 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2351925' 00:19:09.444 killing process with pid 2351925 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2351925 00:19:09.444 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2351925 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.703 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.bGC 00:19:12.240 00:19:12.240 real 0m21.686s 00:19:12.240 user 0m22.875s 00:19:12.240 sys 0m10.257s 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.240 ************************************ 00:19:12.240 END TEST nvmf_fips 00:19:12.240 ************************************ 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.240 ************************************ 00:19:12.240 START TEST nvmf_control_msg_list 00:19:12.240 ************************************ 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:12.240 * Looking for test storage... 00:19:12.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.240 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.240 13:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:12.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.241 --rc genhtml_branch_coverage=1 00:19:12.241 --rc genhtml_function_coverage=1 00:19:12.241 --rc genhtml_legend=1 00:19:12.241 --rc geninfo_all_blocks=1 00:19:12.241 --rc geninfo_unexecuted_blocks=1 00:19:12.241 00:19:12.241 ' 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:12.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.241 --rc genhtml_branch_coverage=1 00:19:12.241 --rc genhtml_function_coverage=1 00:19:12.241 --rc genhtml_legend=1 00:19:12.241 --rc geninfo_all_blocks=1 00:19:12.241 --rc geninfo_unexecuted_blocks=1 00:19:12.241 00:19:12.241 ' 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:12.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.241 --rc genhtml_branch_coverage=1 00:19:12.241 --rc genhtml_function_coverage=1 00:19:12.241 --rc genhtml_legend=1 00:19:12.241 --rc geninfo_all_blocks=1 00:19:12.241 --rc geninfo_unexecuted_blocks=1 00:19:12.241 00:19:12.241 ' 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # 
LCOV='lcov 00:19:12.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.241 --rc genhtml_branch_coverage=1 00:19:12.241 --rc genhtml_function_coverage=1 00:19:12.241 --rc genhtml_legend=1 00:19:12.241 --rc geninfo_all_blocks=1 00:19:12.241 --rc geninfo_unexecuted_blocks=1 00:19:12.241 00:19:12.241 ' 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.241 13:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.241 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.242 13:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:12.242 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.817 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.817 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.817 13:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.817 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:18.818 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:18.818 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:18.818 13:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:18.818 Found net devices under 0000:86:00.0: cvl_0_0 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:18.818 13:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:18.818 Found net devices under 0000:86:00.1: cvl_0_1 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.818 13:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:18.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:18.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:19:18.818 00:19:18.818 --- 10.0.0.2 ping statistics --- 00:19:18.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.818 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:19:18.818 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:19:18.818 00:19:18.819 --- 10.0.0.1 ping statistics --- 00:19:18.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.819 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2357550 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2357550 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 2357550 ']' 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.819 [2024-11-18 13:02:15.701687] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:19:18.819 [2024-11-18 13:02:15.701739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.819 [2024-11-18 13:02:15.782772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.819 [2024-11-18 13:02:15.824195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.819 [2024-11-18 13:02:15.824231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.819 [2024-11-18 13:02:15.824239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.819 [2024-11-18 13:02:15.824245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.819 [2024-11-18 13:02:15.824251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:18.819 [2024-11-18 13:02:15.824841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.819 [2024-11-18 13:02:15.960368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.819 Malloc0 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.819 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.819 [2024-11-18 13:02:16.004673] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.819 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.819 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2357579 00:19:18.819 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:18.819 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2357580 00:19:18.819 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:18.819 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2357581 00:19:18.819 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2357579 00:19:18.819 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:18.819 [2024-11-18 13:02:16.089141] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:18.819 [2024-11-18 13:02:16.099359] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:18.819 [2024-11-18 13:02:16.099535] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:19.757 Initializing NVMe Controllers 00:19:19.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:19.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:19.757 Initialization complete. Launching workers. 00:19:19.757 ======================================================== 00:19:19.757 Latency(us) 00:19:19.757 Device Information : IOPS MiB/s Average min max 00:19:19.757 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6208.00 24.25 160.72 126.15 380.76 00:19:19.757 ======================================================== 00:19:19.757 Total : 6208.00 24.25 160.72 126.15 380.76 00:19:19.757 00:19:19.757 Initializing NVMe Controllers 00:19:19.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:19.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:19.757 Initialization complete. Launching workers. 
00:19:19.757 ======================================================== 00:19:19.757 Latency(us) 00:19:19.757 Device Information : IOPS MiB/s Average min max 00:19:19.757 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6303.00 24.62 158.31 131.76 370.11 00:19:19.757 ======================================================== 00:19:19.757 Total : 6303.00 24.62 158.31 131.76 370.11 00:19:19.757 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2357580 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2357581 00:19:19.757 Initializing NVMe Controllers 00:19:19.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:19.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:19.757 Initialization complete. Launching workers. 00:19:19.757 ======================================================== 00:19:19.757 Latency(us) 00:19:19.757 Device Information : IOPS MiB/s Average min max 00:19:19.757 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40967.76 40697.04 41890.19 00:19:19.757 ======================================================== 00:19:19.757 Total : 25.00 0.10 40967.76 40697.04 41890.19 00:19:19.757 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:19.757 13:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:19.757 rmmod nvme_tcp 00:19:19.757 rmmod nvme_fabrics 00:19:19.757 rmmod nvme_keyring 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2357550 ']' 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2357550 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 2357550 ']' 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 2357550 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:19.757 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2357550 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 2357550' 00:19:20.016 killing process with pid 2357550 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 2357550 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 2357550 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.016 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:22.552 00:19:22.552 real 0m10.279s 00:19:22.552 user 0m6.596s 
00:19:22.552 sys 0m5.704s 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.552 ************************************ 00:19:22.552 END TEST nvmf_control_msg_list 00:19:22.552 ************************************ 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:22.552 ************************************ 00:19:22.552 START TEST nvmf_wait_for_buf 00:19:22.552 ************************************ 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:22.552 * Looking for test storage... 
00:19:22.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:19:22.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.552 --rc genhtml_branch_coverage=1 00:19:22.552 --rc genhtml_function_coverage=1 00:19:22.552 --rc genhtml_legend=1 00:19:22.552 --rc geninfo_all_blocks=1 00:19:22.552 --rc geninfo_unexecuted_blocks=1 00:19:22.552 00:19:22.552 ' 00:19:22.552 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.553 --rc genhtml_branch_coverage=1 00:19:22.553 --rc genhtml_function_coverage=1 00:19:22.553 --rc genhtml_legend=1 00:19:22.553 --rc geninfo_all_blocks=1 00:19:22.553 --rc geninfo_unexecuted_blocks=1 00:19:22.553 00:19:22.553 ' 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.553 --rc genhtml_branch_coverage=1 00:19:22.553 --rc genhtml_function_coverage=1 00:19:22.553 --rc genhtml_legend=1 00:19:22.553 --rc geninfo_all_blocks=1 00:19:22.553 --rc geninfo_unexecuted_blocks=1 00:19:22.553 00:19:22.553 ' 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.553 --rc genhtml_branch_coverage=1 00:19:22.553 --rc genhtml_function_coverage=1 00:19:22.553 --rc genhtml_legend=1 00:19:22.553 --rc geninfo_all_blocks=1 00:19:22.553 --rc geninfo_unexecuted_blocks=1 00:19:22.553 00:19:22.553 ' 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.553 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.554 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:22.554 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:22.554 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:22.554 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:29.130 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:29.130 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:29.130 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:29.131 Found net devices under 0000:86:00.0: cvl_0_0 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.131 13:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:29.131 Found net devices under 0000:86:00.1: cvl_0_1 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:29.131 13:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.131 13:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:29.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:19:29.131 00:19:29.131 --- 10.0.0.2 ping statistics --- 00:19:29.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.131 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:19:29.131 00:19:29.131 --- 10.0.0.1 ping statistics --- 00:19:29.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.131 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2361329 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2361329 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 2361329 ']' 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:29.131 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.131 [2024-11-18 13:02:26.028323] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:19:29.131 [2024-11-18 13:02:26.028387] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.131 [2024-11-18 13:02:26.108374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.131 [2024-11-18 13:02:26.149274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.131 [2024-11-18 13:02:26.149311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:29.131 [2024-11-18 13:02:26.149319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.131 [2024-11-18 13:02:26.149325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.131 [2024-11-18 13:02:26.149330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.131 [2024-11-18 13:02:26.149919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.131 
13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:29.131 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.132 Malloc0 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:29.132 [2024-11-18 13:02:26.322412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.132 [2024-11-18 13:02:26.346593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:29.132 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:29.132 [2024-11-18 13:02:26.432445] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:30.511 Initializing NVMe Controllers 00:19:30.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:30.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:30.511 Initialization complete. Launching workers. 00:19:30.511 ======================================================== 00:19:30.511 Latency(us) 00:19:30.511 Device Information : IOPS MiB/s Average min max 00:19:30.511 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.58 7281.94 63856.02 00:19:30.511 ======================================================== 00:19:30.511 Total : 129.00 16.12 32238.58 7281.94 63856.02 00:19:30.511 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.511 13:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.511 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.511 rmmod nvme_tcp 00:19:30.511 rmmod nvme_fabrics 00:19:30.511 rmmod nvme_keyring 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2361329 ']' 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2361329 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 2361329 ']' 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 2361329 
00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2361329 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2361329' 00:19:30.511 killing process with pid 2361329 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 2361329 00:19:30.511 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 2361329 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.772 13:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.772 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.683 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.683 00:19:32.683 real 0m10.521s 00:19:32.683 user 0m4.067s 00:19:32.683 sys 0m4.899s 00:19:32.683 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:32.683 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:32.683 ************************************ 00:19:32.683 END TEST nvmf_wait_for_buf 00:19:32.683 ************************************ 00:19:32.683 13:02:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:32.683 13:02:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:32.683 13:02:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:32.683 13:02:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:32.683 13:02:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.683 13:02:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.256 
13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:39.256 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.256 13:02:35 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:39.256 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.256 13:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.256 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.256 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.256 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.256 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.256 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:39.256 Found net devices under 0000:86:00.0: cvl_0_0 00:19:39.256 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.256 13:02:36 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:39.257 Found net devices under 0000:86:00.1: cvl_0_1 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.257 ************************************ 00:19:39.257 START TEST nvmf_perf_adq 00:19:39.257 ************************************ 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:39.257 * Looking for test storage... 00:19:39.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:39.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.257 --rc genhtml_branch_coverage=1 00:19:39.257 --rc genhtml_function_coverage=1 00:19:39.257 --rc genhtml_legend=1 00:19:39.257 --rc geninfo_all_blocks=1 00:19:39.257 --rc geninfo_unexecuted_blocks=1 00:19:39.257 00:19:39.257 ' 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:39.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.257 --rc genhtml_branch_coverage=1 00:19:39.257 --rc genhtml_function_coverage=1 00:19:39.257 --rc genhtml_legend=1 00:19:39.257 --rc geninfo_all_blocks=1 00:19:39.257 --rc geninfo_unexecuted_blocks=1 00:19:39.257 00:19:39.257 ' 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:39.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.257 --rc genhtml_branch_coverage=1 00:19:39.257 --rc genhtml_function_coverage=1 00:19:39.257 --rc genhtml_legend=1 00:19:39.257 --rc geninfo_all_blocks=1 00:19:39.257 --rc geninfo_unexecuted_blocks=1 00:19:39.257 00:19:39.257 ' 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:39.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.257 --rc genhtml_branch_coverage=1 00:19:39.257 --rc genhtml_function_coverage=1 00:19:39.257 --rc genhtml_legend=1 00:19:39.257 --rc geninfo_all_blocks=1 00:19:39.257 --rc geninfo_unexecuted_blocks=1 00:19:39.257 00:19:39.257 ' 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.257 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.258 13:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:39.258 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:44.532 13:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:44.532 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:44.533 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:44.533 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:44.533 Found net devices under 0000:86:00.0: cvl_0_0 00:19:44.533 13:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:44.533 Found net devices under 0000:86:00.1: cvl_0_1 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:44.533 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:45.471 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:47.377 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:52.650 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.650 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:52.651 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:52.651 13:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:52.651 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:52.651 Found net devices under 0000:86:00.0: cvl_0_0 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:52.651 Found net devices under 0000:86:00.1: cvl_0_1 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:52.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:52.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:19:52.651 00:19:52.651 --- 10.0.0.2 ping statistics --- 00:19:52.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.651 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:19:52.651 00:19:52.651 --- 10.0.0.1 ping statistics --- 00:19:52.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.651 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2369684 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:52.651 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2369684 00:19:52.652 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2369684 ']' 00:19:52.652 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.652 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:52.652 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.652 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:52.652 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.911 [2024-11-18 13:02:50.354153] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:19:52.911 [2024-11-18 13:02:50.354197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.911 [2024-11-18 13:02:50.418955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.911 [2024-11-18 13:02:50.460979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.911 [2024-11-18 13:02:50.461019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.911 [2024-11-18 13:02:50.461026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.911 [2024-11-18 13:02:50.461032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.911 [2024-11-18 13:02:50.461038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:52.911 [2024-11-18 13:02:50.462547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.911 [2024-11-18 13:02:50.462653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.911 [2024-11-18 13:02:50.462762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.911 [2024-11-18 13:02:50.462763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.911 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:53.171 13:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.171 [2024-11-18 13:02:50.708888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.171 Malloc1 00:19:53.171 13:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.171 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.172 [2024-11-18 13:02:50.771186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2369716 00:19:53.172 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:53.172 13:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:55.713 "tick_rate": 2300000000, 00:19:55.713 "poll_groups": [ 00:19:55.713 { 00:19:55.713 "name": "nvmf_tgt_poll_group_000", 00:19:55.713 "admin_qpairs": 1, 00:19:55.713 "io_qpairs": 1, 00:19:55.713 "current_admin_qpairs": 1, 00:19:55.713 "current_io_qpairs": 1, 00:19:55.713 "pending_bdev_io": 0, 00:19:55.713 "completed_nvme_io": 19241, 00:19:55.713 "transports": [ 00:19:55.713 { 00:19:55.713 "trtype": "TCP" 00:19:55.713 } 00:19:55.713 ] 00:19:55.713 }, 00:19:55.713 { 00:19:55.713 "name": "nvmf_tgt_poll_group_001", 00:19:55.713 "admin_qpairs": 0, 00:19:55.713 "io_qpairs": 1, 00:19:55.713 "current_admin_qpairs": 0, 00:19:55.713 "current_io_qpairs": 1, 00:19:55.713 "pending_bdev_io": 0, 00:19:55.713 "completed_nvme_io": 19174, 00:19:55.713 "transports": [ 00:19:55.713 { 00:19:55.713 "trtype": "TCP" 00:19:55.713 } 00:19:55.713 ] 00:19:55.713 }, 00:19:55.713 { 00:19:55.713 "name": "nvmf_tgt_poll_group_002", 00:19:55.713 "admin_qpairs": 0, 00:19:55.713 "io_qpairs": 1, 00:19:55.713 "current_admin_qpairs": 0, 00:19:55.713 "current_io_qpairs": 1, 00:19:55.713 "pending_bdev_io": 0, 00:19:55.713 "completed_nvme_io": 19519, 00:19:55.713 
"transports": [ 00:19:55.713 { 00:19:55.713 "trtype": "TCP" 00:19:55.713 } 00:19:55.713 ] 00:19:55.713 }, 00:19:55.713 { 00:19:55.713 "name": "nvmf_tgt_poll_group_003", 00:19:55.713 "admin_qpairs": 0, 00:19:55.713 "io_qpairs": 1, 00:19:55.713 "current_admin_qpairs": 0, 00:19:55.713 "current_io_qpairs": 1, 00:19:55.713 "pending_bdev_io": 0, 00:19:55.713 "completed_nvme_io": 19145, 00:19:55.713 "transports": [ 00:19:55.713 { 00:19:55.713 "trtype": "TCP" 00:19:55.713 } 00:19:55.713 ] 00:19:55.713 } 00:19:55.713 ] 00:19:55.713 }' 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:55.713 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2369716 00:20:03.833 Initializing NVMe Controllers 00:20:03.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:03.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:03.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:03.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:03.833 Initialization complete. Launching workers. 
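The poll-group check the test just ran (target/perf_adq.sh@85-87) can be sketched as a standalone script: count the poll groups whose `current_io_qpairs` is active and fail if any core missed its share. This is a hypothetical, hedged reconstruction — the inline `$stats` JSON is a two-group stand-in for real `rpc_cmd nvmf_get_stats` output, not data from this run.

```shell
# Standalone sketch of the ADQ steering check above. Assumption: $stats
# stands in for `rpc_cmd nvmf_get_stats` output; here it has 2 poll groups.
command -v jq >/dev/null || { echo "jq not installed; skipping"; exit 0; }

stats='{"poll_groups":[{"name":"pg0","current_io_qpairs":1},{"name":"pg1","current_io_qpairs":1}]}'
expected=2

# jq emits one line per poll group with an active I/O qpair; wc -l counts them,
# mirroring the `jq ... | wc -l` pipeline in the log.
count=$(echo "$stats" | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)

if [ "$count" -ne "$expected" ]; then
    echo "ADQ steering failed: $count of $expected poll groups active" >&2
    exit 1
fi
echo "OK: $count poll groups each carry an active I/O qpair"
```

In the run above the same pipeline yielded `count=4`, so the `[[ 4 -ne 4 ]]` guard passed and the perf workload was allowed to finish.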
00:20:03.833 ======================================================== 00:20:03.833 Latency(us) 00:20:03.833 Device Information : IOPS MiB/s Average min max 00:20:03.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10235.18 39.98 6252.60 2308.56 10299.21 00:20:03.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10335.48 40.37 6192.34 2374.50 10624.85 00:20:03.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10363.58 40.48 6174.98 1890.96 11003.70 00:20:03.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10273.18 40.13 6230.46 2429.73 10940.81 00:20:03.833 ======================================================== 00:20:03.833 Total : 41207.42 160.97 6212.45 1890.96 11003.70 00:20:03.833 00:20:03.833 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:03.833 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:03.833 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:03.833 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:03.833 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:03.833 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.833 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:03.833 rmmod nvme_tcp 00:20:03.833 rmmod nvme_fabrics 00:20:03.833 rmmod nvme_keyring 00:20:03.833 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:03.833 13:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2369684 ']' 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2369684 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2369684 ']' 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2369684 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2369684 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2369684' 00:20:03.833 killing process with pid 2369684 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2369684 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2369684 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:03.833 
13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.833 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.740 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:05.740 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:05.740 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:05.740 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:07.119 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:09.026 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:14.312 13:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:14.312 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:14.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:14.313 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:14.313 Found net devices under 0000:86:00.0: cvl_0_0 00:20:14.313 13:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:14.313 Found net devices under 0000:86:00.1: cvl_0_1 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:14.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:20:14.313 00:20:14.313 --- 10.0.0.2 ping statistics --- 00:20:14.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.313 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:20:14.313 00:20:14.313 --- 10.0.0.1 ping statistics --- 00:20:14.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.313 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:14.313 net.core.busy_poll = 1 00:20:14.313 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:14.314 net.core.busy_read = 1 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2374012 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2374012 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
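Collected in one place, the ADQ host configuration driven above (adq_configure_driver, perf_adq.sh@22-38) amounts to the sketch below. The interface name and 2-TC/2-queue split mirror this log but are assumptions for any other host, and `run()` only echoes each command, so the sketch is safe to execute without root, a network namespace, or an E810 NIC.

```shell
# Sketch of the ADQ setup steps from the log. run() echoes instead of
# executing; substitute "$@" to apply for real (requires root + ice/E810).
IFACE=${IFACE:-cvl_0_0}
run() { echo "+ $*"; }

run ethtool --offload "$IFACE" hw-tc-offload on                  # enable hardware TC offload
run ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
run sysctl -w net.core.busy_poll=1                               # busy-poll sockets instead of sleeping
run sysctl -w net.core.busy_read=1
# TC0 = generic traffic (queues 0-1), TC1 = NVMe/TCP traffic (queues 2-3)
run tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
run tc qdisc add dev "$IFACE" ingress
# steer inbound NVMe/TCP (dst port 4420) into TC1 in hardware
run tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The `skip_sw hw_tc 1` filter is what makes the later poll-group check meaningful: connections landing on port 4420 are pinned to the queues of TC1, so each target core should end up owning its own I/O qpair.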
--wait-for-rpc 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2374012 ']' 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:14.314 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.314 [2024-11-18 13:03:11.992471] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:20:14.314 [2024-11-18 13:03:11.992523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.573 [2024-11-18 13:03:12.070977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.573 [2024-11-18 13:03:12.113956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.573 [2024-11-18 13:03:12.113993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.573 [2024-11-18 13:03:12.114001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.573 [2024-11-18 13:03:12.114007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:14.573 [2024-11-18 13:03:12.114012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.573 [2024-11-18 13:03:12.115606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.573 [2024-11-18 13:03:12.115715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.573 [2024-11-18 13:03:12.115862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.573 [2024-11-18 13:03:12.115863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.573 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.833 [2024-11-18 13:03:12.317591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.833 13:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.833 Malloc1 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.833 [2024-11-18 13:03:12.378332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2374040 
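The target bring-up just completed is the usual `rpc_cmd` sequence (perf_adq.sh@45-49). As a standalone sketch: the `scripts/rpc.py` path is an assumption, and `run()` echoes rather than talking to a live `nvmf_tgt`, so this illustrates the order of calls only.

```shell
# Sketch of the RPC calls the test issues against nvmf_tgt. Assumption:
# RPC points at SPDK's rpc.py. run() echoes; swap in "$@" for a live target.
RPC=${RPC:-scripts/rpc.py}
run() { echo "+ $*"; }

run "$RPC" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
run "$RPC" bdev_malloc_create 64 512 -b Malloc1                  # 64 MB bdev, 512 B blocks
run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `--sock-priority 1` on the transport matches the `hw_tc 1` traffic class configured earlier, which is how target-side sockets inherit the ADQ queue steering.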
00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:14.833 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:16.737 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:16.737 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.737 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.737 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.737 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:16.737 "tick_rate": 2300000000, 00:20:16.737 "poll_groups": [ 00:20:16.737 { 00:20:16.737 "name": "nvmf_tgt_poll_group_000", 00:20:16.737 "admin_qpairs": 1, 00:20:16.737 "io_qpairs": 2, 00:20:16.737 "current_admin_qpairs": 1, 00:20:16.737 "current_io_qpairs": 2, 00:20:16.737 "pending_bdev_io": 0, 00:20:16.737 "completed_nvme_io": 27285, 00:20:16.737 "transports": [ 00:20:16.737 { 00:20:16.737 "trtype": "TCP" 00:20:16.737 } 00:20:16.737 ] 00:20:16.737 }, 00:20:16.737 { 00:20:16.737 "name": "nvmf_tgt_poll_group_001", 00:20:16.737 "admin_qpairs": 0, 00:20:16.737 "io_qpairs": 2, 00:20:16.737 "current_admin_qpairs": 0, 00:20:16.737 "current_io_qpairs": 2, 00:20:16.737 "pending_bdev_io": 0, 00:20:16.737 "completed_nvme_io": 27964, 00:20:16.737 "transports": [ 00:20:16.737 { 00:20:16.737 "trtype": "TCP" 00:20:16.737 } 00:20:16.737 ] 00:20:16.737 }, 00:20:16.737 { 00:20:16.737 "name": "nvmf_tgt_poll_group_002", 00:20:16.737 "admin_qpairs": 0, 00:20:16.737 "io_qpairs": 0, 00:20:16.737 "current_admin_qpairs": 0, 
00:20:16.737 "current_io_qpairs": 0, 00:20:16.737 "pending_bdev_io": 0, 00:20:16.737 "completed_nvme_io": 0, 00:20:16.737 "transports": [ 00:20:16.737 { 00:20:16.737 "trtype": "TCP" 00:20:16.737 } 00:20:16.737 ] 00:20:16.737 }, 00:20:16.737 { 00:20:16.737 "name": "nvmf_tgt_poll_group_003", 00:20:16.737 "admin_qpairs": 0, 00:20:16.737 "io_qpairs": 0, 00:20:16.737 "current_admin_qpairs": 0, 00:20:16.737 "current_io_qpairs": 0, 00:20:16.737 "pending_bdev_io": 0, 00:20:16.738 "completed_nvme_io": 0, 00:20:16.738 "transports": [ 00:20:16.738 { 00:20:16.738 "trtype": "TCP" 00:20:16.738 } 00:20:16.738 ] 00:20:16.738 } 00:20:16.738 ] 00:20:16.738 }' 00:20:16.738 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:16.738 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:16.996 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:16.996 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:16.996 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2374040 00:20:25.117 Initializing NVMe Controllers 00:20:25.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:25.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:25.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:25.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:25.117 Initialization complete. Launching workers. 
00:20:25.117 ======================================================== 00:20:25.117 Latency(us) 00:20:25.117 Device Information : IOPS MiB/s Average min max 00:20:25.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7098.40 27.73 9039.44 1494.10 53437.84 00:20:25.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7403.00 28.92 8644.08 1537.60 52390.70 00:20:25.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7693.50 30.05 8317.58 1520.87 53131.67 00:20:25.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6958.70 27.18 9195.58 1446.93 53401.14 00:20:25.117 ======================================================== 00:20:25.117 Total : 29153.60 113.88 8785.82 1446.93 53437.84 00:20:25.117 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:25.117 rmmod nvme_tcp 00:20:25.117 rmmod nvme_fabrics 00:20:25.117 rmmod nvme_keyring 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:25.117 13:03:22 
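The idle-poll-group check earlier in the trace (perf_adq.sh line 108: `jq` piped into `wc -l` over the `nvmf_get_stats` output) can be reproduced standalone. The JSON below is a trimmed, illustrative stats payload, not the exact RPC response; `length` on each matching poll-group object emits one line, so `wc -l` yields the count of groups with no active I/O qpairs:

```shell
# Count poll groups with zero active I/O qpairs, mirroring the
# perf_adq.sh@108 pipeline. Sample payload is illustrative only.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":0}]}'
count=$(echo "$stats" \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
  | wc -l)
echo "$count"
```

In the run above this count came to 2, and the `[[ 2 -lt 2 ]]` guard at perf_adq.sh@109 therefore did not trip, so the test proceeded to wait on the perf process.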
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2374012 ']' 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2374012 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2374012 ']' 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2374012 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2374012 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2374012' 00:20:25.117 killing process with pid 2374012 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2374012 00:20:25.117 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2374012 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:25.377 
13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.377 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.672 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.672 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:28.672 00:20:28.672 real 0m49.907s 00:20:28.672 user 2m44.044s 00:20:28.672 sys 0m10.211s 00:20:28.672 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:28.672 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.672 ************************************ 00:20:28.672 END TEST nvmf_perf_adq 00:20:28.672 ************************************ 00:20:28.672 13:03:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:28.672 13:03:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:28.672 13:03:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:28.672 13:03:25 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:28.672 ************************************ 00:20:28.672 START TEST nvmf_shutdown 00:20:28.672 ************************************ 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:28.672 * Looking for test storage... 00:20:28.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.672 13:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:28.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.672 --rc genhtml_branch_coverage=1 00:20:28.672 --rc genhtml_function_coverage=1 00:20:28.672 --rc genhtml_legend=1 00:20:28.672 --rc geninfo_all_blocks=1 00:20:28.672 --rc geninfo_unexecuted_blocks=1 00:20:28.672 00:20:28.672 ' 00:20:28.672 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:28.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.672 --rc genhtml_branch_coverage=1 00:20:28.672 --rc genhtml_function_coverage=1 00:20:28.672 --rc genhtml_legend=1 00:20:28.672 --rc geninfo_all_blocks=1 00:20:28.672 --rc geninfo_unexecuted_blocks=1 00:20:28.672 00:20:28.673 ' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.673 --rc genhtml_branch_coverage=1 00:20:28.673 --rc genhtml_function_coverage=1 00:20:28.673 --rc genhtml_legend=1 00:20:28.673 --rc geninfo_all_blocks=1 00:20:28.673 --rc geninfo_unexecuted_blocks=1 00:20:28.673 00:20:28.673 ' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.673 --rc genhtml_branch_coverage=1 00:20:28.673 --rc genhtml_function_coverage=1 00:20:28.673 --rc genhtml_legend=1 00:20:28.673 --rc geninfo_all_blocks=1 00:20:28.673 --rc geninfo_unexecuted_blocks=1 00:20:28.673 00:20:28.673 ' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:28.673 ************************************ 00:20:28.673 START TEST nvmf_shutdown_tc1 00:20:28.673 ************************************ 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.673 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:35.247 13:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:35.247 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.248 13:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:35.248 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.248 13:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:35.248 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:35.248 13:03:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:35.248 Found net devices under 0000:86:00.0: cvl_0_0 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:35.248 Found net devices under 0000:86:00.1: cvl_0_1 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:35.248 13:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:35.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:20:35.248 00:20:35.248 --- 10.0.0.2 ping statistics --- 00:20:35.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.248 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:20:35.248 00:20:35.248 --- 10.0.0.1 ping statistics --- 00:20:35.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.248 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:35.248 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
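The nvmf_tcp_init sequence traced above splits the two E810 ports across namespaces, opens the NVMe/TCP port, and verifies reachability. A dry-run sketch of those steps (the `run` echo wrapper is an illustration so no root is needed; drop the echo to apply it for real, which requires CAP_NET_ADMIN):

```shell
# Dry-run sketch of the namespace plumbing traced above.
run() { echo "$@"; }              # echo instead of execute; illustrative only

TARGET_IF=cvl_0_0                 # port handed to the SPDK target
INITIATOR_IF=cvl_0_1              # port left in the default namespace
NS=${TARGET_IF}_ns_spdk           # namespace name, as in the trace

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listen port, tagged so teardown can find the rule again
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: dport 4420 accept'
# reachability check in both directions, as in the ping output above
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The `-m comment` tag mirrors the `ipts` wrapper visible in the trace: every rule the test inserts carries an `SPDK_NVMF:` marker so cleanup can delete exactly those rules.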
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2379489 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2379489 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2379489 ']' 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:35.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.249 [2024-11-18 13:03:32.340972] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:20:35.249 [2024-11-18 13:03:32.341018] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.249 [2024-11-18 13:03:32.421172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.249 [2024-11-18 13:03:32.463989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.249 [2024-11-18 13:03:32.464027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.249 [2024-11-18 13:03:32.464034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.249 [2024-11-18 13:03:32.464040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.249 [2024-11-18 13:03:32.464046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
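The "Waiting for process to start up and listen on UNIX domain socket" message above comes from a polling loop. A minimal sketch of that waitforlisten pattern, assuming the retry count from the trace (`max_retries=100`) and an illustrative socket path:

```shell
# Poll until the app's RPC socket appears, bailing out early if the
# process dies first. Paths and timing are illustrative.
waitforlisten() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        [ -S "$rpc_sock" ] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Checking the pid on every iteration is what lets the harness fail fast when the target crashes on startup instead of burning the full retry budget.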
00:20:35.249 [2024-11-18 13:03:32.465699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.249 [2024-11-18 13:03:32.465732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.249 [2024-11-18 13:03:32.465841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.249 [2024-11-18 13:03:32.465842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.249 [2024-11-18 13:03:32.597947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.249 13:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.249 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.249 Malloc1 00:20:35.249 [2024-11-18 13:03:32.702935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.249 Malloc2 00:20:35.249 Malloc3 00:20:35.249 Malloc4 00:20:35.249 Malloc5 00:20:35.249 Malloc6 00:20:35.249 Malloc7 00:20:35.510 Malloc8 00:20:35.510 Malloc9 
00:20:35.510 Malloc10 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2379762 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2379762 /var/tmp/bdevperf.sock 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2379762 ']' 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.510 { 00:20:35.510 "params": { 00:20:35.510 "name": "Nvme$subsystem", 00:20:35.510 "trtype": "$TEST_TRANSPORT", 00:20:35.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.510 "adrfam": "ipv4", 00:20:35.510 "trsvcid": "$NVMF_PORT", 00:20:35.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.510 "hdgst": ${hdgst:-false}, 00:20:35.510 "ddgst": ${ddgst:-false} 00:20:35.510 }, 00:20:35.510 "method": "bdev_nvme_attach_controller" 00:20:35.510 } 00:20:35.510 EOF 00:20:35.510 )") 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.510 { 00:20:35.510 "params": { 00:20:35.510 "name": "Nvme$subsystem", 00:20:35.510 "trtype": "$TEST_TRANSPORT", 00:20:35.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.510 "adrfam": "ipv4", 00:20:35.510 "trsvcid": "$NVMF_PORT", 00:20:35.510 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.510 "hdgst": ${hdgst:-false}, 00:20:35.510 "ddgst": ${ddgst:-false} 00:20:35.510 }, 00:20:35.510 "method": "bdev_nvme_attach_controller" 00:20:35.510 } 00:20:35.510 EOF 00:20:35.510 )") 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.510 { 00:20:35.510 "params": { 00:20:35.510 "name": "Nvme$subsystem", 00:20:35.510 "trtype": "$TEST_TRANSPORT", 00:20:35.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.510 "adrfam": "ipv4", 00:20:35.510 "trsvcid": "$NVMF_PORT", 00:20:35.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.510 "hdgst": ${hdgst:-false}, 00:20:35.510 "ddgst": ${ddgst:-false} 00:20:35.510 }, 00:20:35.510 "method": "bdev_nvme_attach_controller" 00:20:35.510 } 00:20:35.510 EOF 00:20:35.510 )") 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.510 { 00:20:35.510 "params": { 00:20:35.510 "name": "Nvme$subsystem", 00:20:35.510 "trtype": "$TEST_TRANSPORT", 00:20:35.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.510 "adrfam": "ipv4", 00:20:35.510 "trsvcid": "$NVMF_PORT", 00:20:35.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.510 "hdgst": 
${hdgst:-false}, 00:20:35.510 "ddgst": ${ddgst:-false} 00:20:35.510 }, 00:20:35.510 "method": "bdev_nvme_attach_controller" 00:20:35.510 } 00:20:35.510 EOF 00:20:35.510 )") 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.510 { 00:20:35.510 "params": { 00:20:35.510 "name": "Nvme$subsystem", 00:20:35.510 "trtype": "$TEST_TRANSPORT", 00:20:35.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.510 "adrfam": "ipv4", 00:20:35.510 "trsvcid": "$NVMF_PORT", 00:20:35.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.510 "hdgst": ${hdgst:-false}, 00:20:35.510 "ddgst": ${ddgst:-false} 00:20:35.510 }, 00:20:35.510 "method": "bdev_nvme_attach_controller" 00:20:35.510 } 00:20:35.510 EOF 00:20:35.510 )") 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.510 { 00:20:35.510 "params": { 00:20:35.510 "name": "Nvme$subsystem", 00:20:35.510 "trtype": "$TEST_TRANSPORT", 00:20:35.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.510 "adrfam": "ipv4", 00:20:35.510 "trsvcid": "$NVMF_PORT", 00:20:35.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.510 "hdgst": ${hdgst:-false}, 00:20:35.510 "ddgst": ${ddgst:-false} 00:20:35.510 }, 00:20:35.510 "method": "bdev_nvme_attach_controller" 
00:20:35.510 } 00:20:35.510 EOF 00:20:35.510 )") 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.510 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.510 { 00:20:35.510 "params": { 00:20:35.510 "name": "Nvme$subsystem", 00:20:35.510 "trtype": "$TEST_TRANSPORT", 00:20:35.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.511 "adrfam": "ipv4", 00:20:35.511 "trsvcid": "$NVMF_PORT", 00:20:35.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.511 "hdgst": ${hdgst:-false}, 00:20:35.511 "ddgst": ${ddgst:-false} 00:20:35.511 }, 00:20:35.511 "method": "bdev_nvme_attach_controller" 00:20:35.511 } 00:20:35.511 EOF 00:20:35.511 )") 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.511 [2024-11-18 13:03:33.185544] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:20:35.511 [2024-11-18 13:03:33.185592] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.511 { 00:20:35.511 "params": { 00:20:35.511 "name": "Nvme$subsystem", 00:20:35.511 "trtype": "$TEST_TRANSPORT", 00:20:35.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.511 "adrfam": "ipv4", 00:20:35.511 "trsvcid": "$NVMF_PORT", 00:20:35.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.511 "hdgst": ${hdgst:-false}, 00:20:35.511 "ddgst": ${ddgst:-false} 00:20:35.511 }, 00:20:35.511 "method": "bdev_nvme_attach_controller" 00:20:35.511 } 00:20:35.511 EOF 00:20:35.511 )") 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.511 { 00:20:35.511 "params": { 00:20:35.511 "name": "Nvme$subsystem", 00:20:35.511 "trtype": "$TEST_TRANSPORT", 00:20:35.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.511 "adrfam": "ipv4", 00:20:35.511 "trsvcid": "$NVMF_PORT", 00:20:35.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.511 "hdgst": ${hdgst:-false}, 00:20:35.511 "ddgst": ${ddgst:-false} 00:20:35.511 }, 00:20:35.511 "method": "bdev_nvme_attach_controller" 
00:20:35.511 } 00:20:35.511 EOF 00:20:35.511 )") 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:35.511 { 00:20:35.511 "params": { 00:20:35.511 "name": "Nvme$subsystem", 00:20:35.511 "trtype": "$TEST_TRANSPORT", 00:20:35.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.511 "adrfam": "ipv4", 00:20:35.511 "trsvcid": "$NVMF_PORT", 00:20:35.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.511 "hdgst": ${hdgst:-false}, 00:20:35.511 "ddgst": ${ddgst:-false} 00:20:35.511 }, 00:20:35.511 "method": "bdev_nvme_attach_controller" 00:20:35.511 } 00:20:35.511 EOF 00:20:35.511 )") 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:35.511 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
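The repeated heredoc fragments above are gen_nvmf_target_json building one attach-controller stanza per subsystem id and comma-joining them (the real harness then normalizes the result with `jq .`). A condensed sketch, with static values standing in for the `$TEST_TRANSPORT`/`$NVMF_*` variables:

```shell
# One bdev_nvme_attach_controller stanza per subsystem id, joined with
# commas, mirroring the pattern in the trace above.
gen_nvmf_target_json() {
    local subsystem
    local -a config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```

`"${@:-1}"` defaults to a single subsystem when no ids are passed, which is why the trace shows the same `for subsystem in "${@:-1}"` line once per argument of `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10`.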
00:20:35.771 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:35.771 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:35.771 "params": { 00:20:35.771 "name": "Nvme1", 00:20:35.771 "trtype": "tcp", 00:20:35.771 "traddr": "10.0.0.2", 00:20:35.771 "adrfam": "ipv4", 00:20:35.771 "trsvcid": "4420", 00:20:35.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.771 "hdgst": false, 00:20:35.771 "ddgst": false 00:20:35.771 }, 00:20:35.771 "method": "bdev_nvme_attach_controller" 00:20:35.771 },{ 00:20:35.771 "params": { 00:20:35.771 "name": "Nvme2", 00:20:35.771 "trtype": "tcp", 00:20:35.771 "traddr": "10.0.0.2", 00:20:35.771 "adrfam": "ipv4", 00:20:35.771 "trsvcid": "4420", 00:20:35.771 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.771 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.771 "hdgst": false, 00:20:35.771 "ddgst": false 00:20:35.771 }, 00:20:35.771 "method": "bdev_nvme_attach_controller" 00:20:35.771 },{ 00:20:35.771 "params": { 00:20:35.771 "name": "Nvme3", 00:20:35.771 "trtype": "tcp", 00:20:35.771 "traddr": "10.0.0.2", 00:20:35.771 "adrfam": "ipv4", 00:20:35.771 "trsvcid": "4420", 00:20:35.771 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:35.771 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:35.771 "hdgst": false, 00:20:35.771 "ddgst": false 00:20:35.771 }, 00:20:35.771 "method": "bdev_nvme_attach_controller" 00:20:35.771 },{ 00:20:35.771 "params": { 00:20:35.771 "name": "Nvme4", 00:20:35.771 "trtype": "tcp", 00:20:35.771 "traddr": "10.0.0.2", 00:20:35.771 "adrfam": "ipv4", 00:20:35.771 "trsvcid": "4420", 00:20:35.771 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:35.771 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:35.771 "hdgst": false, 00:20:35.771 "ddgst": false 00:20:35.771 }, 00:20:35.771 "method": "bdev_nvme_attach_controller" 00:20:35.771 },{ 00:20:35.771 "params": { 
00:20:35.771 "name": "Nvme5", 00:20:35.771 "trtype": "tcp", 00:20:35.771 "traddr": "10.0.0.2", 00:20:35.771 "adrfam": "ipv4", 00:20:35.771 "trsvcid": "4420", 00:20:35.771 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:35.771 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:35.771 "hdgst": false, 00:20:35.771 "ddgst": false 00:20:35.771 }, 00:20:35.771 "method": "bdev_nvme_attach_controller" 00:20:35.771 },{ 00:20:35.771 "params": { 00:20:35.771 "name": "Nvme6", 00:20:35.771 "trtype": "tcp", 00:20:35.771 "traddr": "10.0.0.2", 00:20:35.771 "adrfam": "ipv4", 00:20:35.771 "trsvcid": "4420", 00:20:35.771 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:35.771 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:35.771 "hdgst": false, 00:20:35.771 "ddgst": false 00:20:35.771 }, 00:20:35.771 "method": "bdev_nvme_attach_controller" 00:20:35.771 },{ 00:20:35.771 "params": { 00:20:35.771 "name": "Nvme7", 00:20:35.771 "trtype": "tcp", 00:20:35.771 "traddr": "10.0.0.2", 00:20:35.771 "adrfam": "ipv4", 00:20:35.771 "trsvcid": "4420", 00:20:35.771 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:35.771 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:35.771 "hdgst": false, 00:20:35.771 "ddgst": false 00:20:35.771 }, 00:20:35.771 "method": "bdev_nvme_attach_controller" 00:20:35.771 },{ 00:20:35.771 "params": { 00:20:35.771 "name": "Nvme8", 00:20:35.771 "trtype": "tcp", 00:20:35.771 "traddr": "10.0.0.2", 00:20:35.771 "adrfam": "ipv4", 00:20:35.772 "trsvcid": "4420", 00:20:35.772 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:35.772 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:35.772 "hdgst": false, 00:20:35.772 "ddgst": false 00:20:35.772 }, 00:20:35.772 "method": "bdev_nvme_attach_controller" 00:20:35.772 },{ 00:20:35.772 "params": { 00:20:35.772 "name": "Nvme9", 00:20:35.772 "trtype": "tcp", 00:20:35.772 "traddr": "10.0.0.2", 00:20:35.772 "adrfam": "ipv4", 00:20:35.772 "trsvcid": "4420", 00:20:35.772 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:35.772 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:35.772 "hdgst": false, 00:20:35.772 "ddgst": false 00:20:35.772 }, 00:20:35.772 "method": "bdev_nvme_attach_controller" 00:20:35.772 },{ 00:20:35.772 "params": { 00:20:35.772 "name": "Nvme10", 00:20:35.772 "trtype": "tcp", 00:20:35.772 "traddr": "10.0.0.2", 00:20:35.772 "adrfam": "ipv4", 00:20:35.772 "trsvcid": "4420", 00:20:35.772 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:35.772 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:35.772 "hdgst": false, 00:20:35.772 "ddgst": false 00:20:35.772 }, 00:20:35.772 "method": "bdev_nvme_attach_controller" 00:20:35.772 }' 00:20:35.772 [2024-11-18 13:03:33.263942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.772 [2024-11-18 13:03:33.305161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.151 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:37.151 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:37.151 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:37.151 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.151 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.151 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.151 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2379762 00:20:37.151 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:37.151 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:38.532 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2379762 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:38.532 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2379489 00:20:38.532 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:38.532 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:38.532 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:38.532 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:38.532 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.532 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.532 { 00:20:38.532 "params": { 00:20:38.532 "name": "Nvme$subsystem", 00:20:38.532 "trtype": "$TEST_TRANSPORT", 00:20:38.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.532 "adrfam": "ipv4", 00:20:38.532 "trsvcid": "$NVMF_PORT", 00:20:38.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.532 "hdgst": ${hdgst:-false}, 00:20:38.532 "ddgst": ${ddgst:-false} 00:20:38.532 }, 00:20:38.532 "method": "bdev_nvme_attach_controller" 00:20:38.532 } 00:20:38.532 EOF 00:20:38.532 )") 00:20:38.532 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.532 13:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.532 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.532 { 00:20:38.532 "params": { 00:20:38.533 "name": "Nvme$subsystem", 00:20:38.533 "trtype": "$TEST_TRANSPORT", 00:20:38.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.533 "adrfam": "ipv4", 00:20:38.533 "trsvcid": "$NVMF_PORT", 00:20:38.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.533 "hdgst": ${hdgst:-false}, 00:20:38.533 "ddgst": ${ddgst:-false} 00:20:38.533 }, 00:20:38.533 "method": "bdev_nvme_attach_controller" 00:20:38.533 } 00:20:38.533 EOF 00:20:38.533 )") 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.533 { 00:20:38.533 "params": { 00:20:38.533 "name": "Nvme$subsystem", 00:20:38.533 "trtype": "$TEST_TRANSPORT", 00:20:38.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.533 "adrfam": "ipv4", 00:20:38.533 "trsvcid": "$NVMF_PORT", 00:20:38.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.533 "hdgst": ${hdgst:-false}, 00:20:38.533 "ddgst": ${ddgst:-false} 00:20:38.533 }, 00:20:38.533 "method": "bdev_nvme_attach_controller" 00:20:38.533 } 00:20:38.533 EOF 00:20:38.533 )") 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.533 
13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.533 { 00:20:38.533 "params": { 00:20:38.533 "name": "Nvme$subsystem", 00:20:38.533 "trtype": "$TEST_TRANSPORT", 00:20:38.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.533 "adrfam": "ipv4", 00:20:38.533 "trsvcid": "$NVMF_PORT", 00:20:38.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.533 "hdgst": ${hdgst:-false}, 00:20:38.533 "ddgst": ${ddgst:-false} 00:20:38.533 }, 00:20:38.533 "method": "bdev_nvme_attach_controller" 00:20:38.533 } 00:20:38.533 EOF 00:20:38.533 )") 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.533 { 00:20:38.533 "params": { 00:20:38.533 "name": "Nvme$subsystem", 00:20:38.533 "trtype": "$TEST_TRANSPORT", 00:20:38.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.533 "adrfam": "ipv4", 00:20:38.533 "trsvcid": "$NVMF_PORT", 00:20:38.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.533 "hdgst": ${hdgst:-false}, 00:20:38.533 "ddgst": ${ddgst:-false} 00:20:38.533 }, 00:20:38.533 "method": "bdev_nvme_attach_controller" 00:20:38.533 } 00:20:38.533 EOF 00:20:38.533 )") 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:20:38.533 { 00:20:38.533 "params": { 00:20:38.533 "name": "Nvme$subsystem", 00:20:38.533 "trtype": "$TEST_TRANSPORT", 00:20:38.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.533 "adrfam": "ipv4", 00:20:38.533 "trsvcid": "$NVMF_PORT", 00:20:38.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.533 "hdgst": ${hdgst:-false}, 00:20:38.533 "ddgst": ${ddgst:-false} 00:20:38.533 }, 00:20:38.533 "method": "bdev_nvme_attach_controller" 00:20:38.533 } 00:20:38.533 EOF 00:20:38.533 )") 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.533 { 00:20:38.533 "params": { 00:20:38.533 "name": "Nvme$subsystem", 00:20:38.533 "trtype": "$TEST_TRANSPORT", 00:20:38.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.533 "adrfam": "ipv4", 00:20:38.533 "trsvcid": "$NVMF_PORT", 00:20:38.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.533 "hdgst": ${hdgst:-false}, 00:20:38.533 "ddgst": ${ddgst:-false} 00:20:38.533 }, 00:20:38.533 "method": "bdev_nvme_attach_controller" 00:20:38.533 } 00:20:38.533 EOF 00:20:38.533 )") 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.533 [2024-11-18 13:03:35.874056] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:20:38.533 [2024-11-18 13:03:35.874107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380245 ] 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.533 { 00:20:38.533 "params": { 00:20:38.533 "name": "Nvme$subsystem", 00:20:38.533 "trtype": "$TEST_TRANSPORT", 00:20:38.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.533 "adrfam": "ipv4", 00:20:38.533 "trsvcid": "$NVMF_PORT", 00:20:38.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.533 "hdgst": ${hdgst:-false}, 00:20:38.533 "ddgst": ${ddgst:-false} 00:20:38.533 }, 00:20:38.533 "method": "bdev_nvme_attach_controller" 00:20:38.533 } 00:20:38.533 EOF 00:20:38.533 )") 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.533 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.533 { 00:20:38.533 "params": { 00:20:38.533 "name": "Nvme$subsystem", 00:20:38.533 "trtype": "$TEST_TRANSPORT", 00:20:38.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.533 "adrfam": "ipv4", 00:20:38.533 "trsvcid": "$NVMF_PORT", 00:20:38.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.533 "hdgst": ${hdgst:-false}, 00:20:38.533 "ddgst": ${ddgst:-false} 00:20:38.533 }, 00:20:38.534 "method": 
"bdev_nvme_attach_controller" 00:20:38.534 } 00:20:38.534 EOF 00:20:38.534 )") 00:20:38.534 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.534 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.534 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.534 { 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme$subsystem", 00:20:38.534 "trtype": "$TEST_TRANSPORT", 00:20:38.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "$NVMF_PORT", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.534 "hdgst": ${hdgst:-false}, 00:20:38.534 "ddgst": ${ddgst:-false} 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 } 00:20:38.534 EOF 00:20:38.534 )") 00:20:38.534 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:38.534 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:38.534 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:38.534 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme1", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 },{ 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme2", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 },{ 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme3", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 },{ 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme4", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 },{ 00:20:38.534 "params": { 
00:20:38.534 "name": "Nvme5", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 },{ 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme6", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 },{ 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme7", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 },{ 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme8", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 },{ 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme9", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 },{ 00:20:38.534 "params": { 00:20:38.534 "name": "Nvme10", 00:20:38.534 "trtype": "tcp", 00:20:38.534 "traddr": "10.0.0.2", 00:20:38.534 "adrfam": "ipv4", 00:20:38.534 "trsvcid": "4420", 00:20:38.534 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:38.534 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:38.534 "hdgst": false, 00:20:38.534 "ddgst": false 00:20:38.534 }, 00:20:38.534 "method": "bdev_nvme_attach_controller" 00:20:38.534 }' 00:20:38.534 [2024-11-18 13:03:35.952575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.534 [2024-11-18 13:03:35.994038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.912 Running I/O for 1 seconds... 00:20:41.105 2189.00 IOPS, 136.81 MiB/s 00:20:41.105 Latency(us) 00:20:41.105 [2024-11-18T12:03:38.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.105 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme1n1 : 1.16 275.84 17.24 0.00 0.00 229822.29 14930.81 227039.50 00:20:41.106 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme2n1 : 1.16 275.16 17.20 0.00 0.00 224684.83 16298.52 217009.64 00:20:41.106 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme3n1 : 1.15 286.44 17.90 0.00 0.00 213447.64 19603.81 205156.17 00:20:41.106 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme4n1 : 1.15 277.66 17.35 0.00 0.00 218736.37 15386.71 217009.64 00:20:41.106 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme5n1 : 1.14 228.00 14.25 0.00 0.00 257282.28 16982.37 232510.33 00:20:41.106 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme6n1 : 1.17 272.55 17.03 0.00 0.00 216342.17 17096.35 229774.91 00:20:41.106 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme7n1 : 1.17 272.73 17.05 0.00 0.00 213364.91 13107.20 228863.11 00:20:41.106 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme8n1 : 1.17 274.13 17.13 0.00 0.00 209065.54 16868.40 233422.14 00:20:41.106 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme9n1 : 1.18 271.56 16.97 0.00 0.00 208091.40 16526.47 230686.72 00:20:41.106 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.106 Verification LBA range: start 0x0 length 0x400 00:20:41.106 Nvme10n1 : 1.18 270.91 16.93 0.00 0.00 205189.83 13734.07 248011.02 00:20:41.106 [2024-11-18T12:03:38.808Z] =================================================================================================================== 00:20:41.106 [2024-11-18T12:03:38.808Z] Total : 2704.97 169.06 0.00 0.00 218867.20 13107.20 248011.02 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.106 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.106 rmmod nvme_tcp 00:20:41.106 rmmod nvme_fabrics 00:20:41.365 rmmod nvme_keyring 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2379489 ']' 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2379489 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 2379489 ']' 00:20:41.365 
13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 2379489 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2379489 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2379489' 00:20:41.365 killing process with pid 2379489 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 2379489 00:20:41.365 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 2379489 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.627 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:44.165 00:20:44.165 real 0m15.066s 00:20:44.165 user 0m32.775s 00:20:44.165 sys 0m5.875s 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.165 ************************************ 00:20:44.165 END TEST nvmf_shutdown_tc1 00:20:44.165 ************************************ 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set 
+x 00:20:44.165 ************************************ 00:20:44.165 START TEST nvmf_shutdown_tc2 00:20:44.165 ************************************ 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 
00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.165 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 
00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.166 13:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:44.166 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:44.166 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:44.166 13:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.166 13:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:44.166 Found net devices under 0000:86:00.0: cvl_0_0 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:44.166 Found net devices under 0000:86:00.1: cvl_0_1 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
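The discovery steps traced above group PCI NICs into per-family arrays keyed by "vendor:device" ID, then resolve each PCI address to its kernel netdev name through a sysfs glob. A minimal standalone sketch of that pattern (the `pci_bus_cache` contents are hypothetical stand-ins for a real bus scan; on a host without these devices the sysfs glob stays unexpanded):

```shell
#!/usr/bin/env bash
# Sketch of the classification pattern from nvmf/common.sh: bucket
# PCI NICs by "vendor:device" ID, then map each address to its
# netdev via /sys. 0x8086:0x159b (Intel E810) mirrors the trace;
# the cache below is a hypothetical stand-in for an lspci scan.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"
)
e810=() x722=() mlx=()
# Unquoted expansion intentionally word-splits multi-device entries.
e810+=(${pci_bus_cache["0x8086:0x159b"]:-})
x722+=(${pci_bus_cache["0x8086:0x37d2"]:-})
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  # On a real host this glob resolves to the interface directory,
  # e.g. /sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the basename
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
```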
# [[ yes == yes ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.166 13:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.166 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:44.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:20:44.167 00:20:44.167 --- 10.0.0.2 ping statistics --- 00:20:44.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.167 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:44.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:20:44.167 00:20:44.167 --- 10.0.0.1 ping statistics --- 00:20:44.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.167 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.167 13:03:41 
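The `nvmf_tcp_init` sequence the trace just completed moves one port of the NIC into a private network namespace (the target side) while the other stays in the default namespace (the initiator side), then verifies connectivity with a ping in each direction. A dry-run sketch of that plumbing, assuming the interface names and addresses from the trace; `run` only echoes each command so the sketch is safe to execute without root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf_tcp_init.
# Names/IPs mirror the trace above; swap run() for "$@" (as root)
# to actually apply the configuration.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
run() { echo "+ $*"; }                       # replace body with "$@" to execute
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"     # target port leaves default ns
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator
```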
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2381271 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2381271 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2381271 ']' 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:44.167 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.167 [2024-11-18 13:03:41.778568] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:20:44.167 [2024-11-18 13:03:41.778612] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.167 [2024-11-18 13:03:41.859737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.426 [2024-11-18 13:03:41.902160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.426 [2024-11-18 13:03:41.902200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.426 [2024-11-18 13:03:41.902207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.426 [2024-11-18 13:03:41.902213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.426 [2024-11-18 13:03:41.902218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:44.426 [2024-11-18 13:03:41.903885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.426 [2024-11-18 13:03:41.903919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.426 [2024-11-18 13:03:41.904029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.426 [2024-11-18 13:03:41.904030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 [2024-11-18 13:03:42.670469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.995 13:03:42 
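The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a polling helper that retries until the launched app exposes its RPC socket. A hedged sketch of that pattern (a plain path test stands in for the real helper, which probes the socket with an RPC call; the retry budget and message mirror the trace):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern from autotest_common.sh:
# poll until the daemon is alive AND its RPC endpoint exists,
# giving up after max_retries. The [ -e ] path test is a
# simplification of the real socket/RPC probe.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # process died: give up early
    [ -e "$rpc_addr" ] && return 0           # endpoint is up
    sleep 0.1
  done
  return 1                                   # retry budget exhausted
}
```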
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.995 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:44.996 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.996 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.255 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.255 Malloc1 00:20:45.255 [2024-11-18 13:03:42.776971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.255 Malloc2 00:20:45.255 Malloc3 00:20:45.255 Malloc4 00:20:45.255 Malloc5 00:20:45.514 Malloc6 00:20:45.514 Malloc7 00:20:45.514 Malloc8 00:20:45.514 Malloc9 
00:20:45.514 Malloc10 00:20:45.514 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.514 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:45.514 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:45.514 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.514 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2381545 00:20:45.514 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2381545 /var/tmp/bdevperf.sock 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2381545 ']' 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:45.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.515 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.515 { 00:20:45.515 "params": { 00:20:45.515 "name": "Nvme$subsystem", 00:20:45.515 "trtype": "$TEST_TRANSPORT", 00:20:45.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.515 "adrfam": "ipv4", 00:20:45.515 "trsvcid": "$NVMF_PORT", 00:20:45.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.515 "hdgst": ${hdgst:-false}, 00:20:45.515 "ddgst": ${ddgst:-false} 00:20:45.515 }, 00:20:45.515 "method": "bdev_nvme_attach_controller" 00:20:45.515 } 00:20:45.515 EOF 00:20:45.515 )") 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.776 { 00:20:45.776 "params": { 00:20:45.776 "name": "Nvme$subsystem", 00:20:45.776 "trtype": "$TEST_TRANSPORT", 00:20:45.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.776 
"adrfam": "ipv4", 00:20:45.776 "trsvcid": "$NVMF_PORT", 00:20:45.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.776 "hdgst": ${hdgst:-false}, 00:20:45.776 "ddgst": ${ddgst:-false} 00:20:45.776 }, 00:20:45.776 "method": "bdev_nvme_attach_controller" 00:20:45.776 } 00:20:45.776 EOF 00:20:45.776 )") 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.776 { 00:20:45.776 "params": { 00:20:45.776 "name": "Nvme$subsystem", 00:20:45.776 "trtype": "$TEST_TRANSPORT", 00:20:45.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.776 "adrfam": "ipv4", 00:20:45.776 "trsvcid": "$NVMF_PORT", 00:20:45.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.776 "hdgst": ${hdgst:-false}, 00:20:45.776 "ddgst": ${ddgst:-false} 00:20:45.776 }, 00:20:45.776 "method": "bdev_nvme_attach_controller" 00:20:45.776 } 00:20:45.776 EOF 00:20:45.776 )") 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.776 { 00:20:45.776 "params": { 00:20:45.776 "name": "Nvme$subsystem", 00:20:45.776 "trtype": "$TEST_TRANSPORT", 00:20:45.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.776 "adrfam": "ipv4", 00:20:45.776 "trsvcid": "$NVMF_PORT", 00:20:45.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:45.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.776 "hdgst": ${hdgst:-false}, 00:20:45.776 "ddgst": ${ddgst:-false} 00:20:45.776 }, 00:20:45.776 "method": "bdev_nvme_attach_controller" 00:20:45.776 } 00:20:45.776 EOF 00:20:45.776 )") 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.776 { 00:20:45.776 "params": { 00:20:45.776 "name": "Nvme$subsystem", 00:20:45.776 "trtype": "$TEST_TRANSPORT", 00:20:45.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.776 "adrfam": "ipv4", 00:20:45.776 "trsvcid": "$NVMF_PORT", 00:20:45.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.776 "hdgst": ${hdgst:-false}, 00:20:45.776 "ddgst": ${ddgst:-false} 00:20:45.776 }, 00:20:45.776 "method": "bdev_nvme_attach_controller" 00:20:45.776 } 00:20:45.776 EOF 00:20:45.776 )") 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.776 { 00:20:45.776 "params": { 00:20:45.776 "name": "Nvme$subsystem", 00:20:45.776 "trtype": "$TEST_TRANSPORT", 00:20:45.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.776 "adrfam": "ipv4", 00:20:45.776 "trsvcid": "$NVMF_PORT", 00:20:45.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.776 "hdgst": ${hdgst:-false}, 00:20:45.776 "ddgst": 
${ddgst:-false} 00:20:45.776 }, 00:20:45.776 "method": "bdev_nvme_attach_controller" 00:20:45.776 } 00:20:45.776 EOF 00:20:45.776 )") 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.776 { 00:20:45.776 "params": { 00:20:45.776 "name": "Nvme$subsystem", 00:20:45.776 "trtype": "$TEST_TRANSPORT", 00:20:45.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.776 "adrfam": "ipv4", 00:20:45.776 "trsvcid": "$NVMF_PORT", 00:20:45.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.776 "hdgst": ${hdgst:-false}, 00:20:45.776 "ddgst": ${ddgst:-false} 00:20:45.776 }, 00:20:45.776 "method": "bdev_nvme_attach_controller" 00:20:45.776 } 00:20:45.776 EOF 00:20:45.776 )") 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.776 [2024-11-18 13:03:43.255692] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:20:45.776 [2024-11-18 13:03:43.255740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381545 ] 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.776 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.776 { 00:20:45.776 "params": { 00:20:45.777 "name": "Nvme$subsystem", 00:20:45.777 "trtype": "$TEST_TRANSPORT", 00:20:45.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "$NVMF_PORT", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.777 "hdgst": ${hdgst:-false}, 00:20:45.777 "ddgst": ${ddgst:-false} 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 } 00:20:45.777 EOF 00:20:45.777 )") 00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.777 { 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme$subsystem", 00:20:45.777 "trtype": "$TEST_TRANSPORT", 00:20:45.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "$NVMF_PORT", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.777 "hdgst": ${hdgst:-false}, 00:20:45.777 "ddgst": ${ddgst:-false} 00:20:45.777 }, 00:20:45.777 "method": 
"bdev_nvme_attach_controller" 00:20:45.777 } 00:20:45.777 EOF 00:20:45.777 )") 00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.777 { 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme$subsystem", 00:20:45.777 "trtype": "$TEST_TRANSPORT", 00:20:45.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "$NVMF_PORT", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.777 "hdgst": ${hdgst:-false}, 00:20:45.777 "ddgst": ${ddgst:-false} 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 } 00:20:45.777 EOF 00:20:45.777 )") 00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
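The `gen_nvmf_target_json` trace above builds one JSON stanza per subsystem id with a command-substituted heredoc, appends each to a `config` array, and finally joins the fragments with `IFS=,` before handing the result to bdevperf via `--json /dev/fd/63`. A simplified sketch of that build-and-join pattern (values mirror the trace; the real helper's outer `{"subsystems": ...}` wrapper and `jq` pass are omitted for brevity):

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-per-subsystem config generation seen in
# gen_nvmf_target_json: one attach-controller stanza per id,
# joined with commas via a scoped IFS.
gen_config() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do            # default to subsystem 1
    config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp",
 "traddr": "10.0.0.2", "trsvcid": "4420",
 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"},
 "method": "bdev_nvme_attach_controller"}
EOF
    )")
  done
  local IFS=,                               # "${config[*]}" joins with commas
  printf '[%s]\n' "${config[*]}"
}
```

Scoping `IFS` with `local` keeps the comma-join from leaking into later word splitting, which is why the original helper does the join inside its own function.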
00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:45.777 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme1", 00:20:45.777 "trtype": "tcp", 00:20:45.777 "traddr": "10.0.0.2", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "4420", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.777 "hdgst": false, 00:20:45.777 "ddgst": false 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 },{ 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme2", 00:20:45.777 "trtype": "tcp", 00:20:45.777 "traddr": "10.0.0.2", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "4420", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:45.777 "hdgst": false, 00:20:45.777 "ddgst": false 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 },{ 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme3", 00:20:45.777 "trtype": "tcp", 00:20:45.777 "traddr": "10.0.0.2", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "4420", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:45.777 "hdgst": false, 00:20:45.777 "ddgst": false 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 },{ 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme4", 00:20:45.777 "trtype": "tcp", 00:20:45.777 "traddr": "10.0.0.2", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "4420", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:45.777 "hdgst": false, 00:20:45.777 "ddgst": false 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 },{ 00:20:45.777 "params": { 
00:20:45.777 "name": "Nvme5", 00:20:45.777 "trtype": "tcp", 00:20:45.777 "traddr": "10.0.0.2", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "4420", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:45.777 "hdgst": false, 00:20:45.777 "ddgst": false 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 },{ 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme6", 00:20:45.777 "trtype": "tcp", 00:20:45.777 "traddr": "10.0.0.2", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "4420", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:45.777 "hdgst": false, 00:20:45.777 "ddgst": false 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 },{ 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme7", 00:20:45.777 "trtype": "tcp", 00:20:45.777 "traddr": "10.0.0.2", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "4420", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:45.777 "hdgst": false, 00:20:45.777 "ddgst": false 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 },{ 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme8", 00:20:45.777 "trtype": "tcp", 00:20:45.777 "traddr": "10.0.0.2", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "4420", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:45.777 "hdgst": false, 00:20:45.777 "ddgst": false 00:20:45.777 }, 00:20:45.777 "method": "bdev_nvme_attach_controller" 00:20:45.777 },{ 00:20:45.777 "params": { 00:20:45.777 "name": "Nvme9", 00:20:45.777 "trtype": "tcp", 00:20:45.777 "traddr": "10.0.0.2", 00:20:45.777 "adrfam": "ipv4", 00:20:45.777 "trsvcid": "4420", 00:20:45.777 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:45.777 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:45.777 "hdgst": false, 00:20:45.778 "ddgst": false 00:20:45.778 }, 00:20:45.778 "method": "bdev_nvme_attach_controller" 00:20:45.778 },{ 00:20:45.778 "params": { 00:20:45.778 "name": "Nvme10", 00:20:45.778 "trtype": "tcp", 00:20:45.778 "traddr": "10.0.0.2", 00:20:45.778 "adrfam": "ipv4", 00:20:45.778 "trsvcid": "4420", 00:20:45.778 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:45.778 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:45.778 "hdgst": false, 00:20:45.778 "ddgst": false 00:20:45.778 }, 00:20:45.778 "method": "bdev_nvme_attach_controller" 00:20:45.778 }' 00:20:45.778 [2024-11-18 13:03:43.334117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.778 [2024-11-18 13:03:43.375812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.157 Running I/O for 10 seconds... 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:47.726 13:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=72 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:20:47.726 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:47.985 13:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=199 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 199 -ge 100 ']' 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2381545 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2381545 ']' 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2381545 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:47.985 13:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2381545 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2381545' 00:20:47.985 killing process with pid 2381545 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2381545 00:20:47.985 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2381545 00:20:47.985 Received shutdown signal, test time was about 0.943338 seconds 00:20:47.985 00:20:47.985 Latency(us) 00:20:47.985 [2024-11-18T12:03:45.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.985 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme1n1 : 0.93 281.00 17.56 0.00 0.00 224785.14 2649.93 220656.86 00:20:47.985 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme2n1 : 0.93 278.33 17.40 0.00 0.00 222907.17 2664.18 217921.45 00:20:47.985 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme3n1 : 0.92 305.37 19.09 0.00 0.00 196643.79 11340.58 220656.86 00:20:47.985 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme4n1 : 0.92 284.54 17.78 0.00 0.00 210060.57 
3704.21 221568.67 00:20:47.985 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme5n1 : 0.94 272.69 17.04 0.00 0.00 216018.37 17552.25 218833.25 00:20:47.985 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme6n1 : 0.91 286.56 17.91 0.00 0.00 200122.03 8947.09 211538.81 00:20:47.985 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme7n1 : 0.93 275.18 17.20 0.00 0.00 205917.27 15158.76 222480.47 00:20:47.985 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme8n1 : 0.94 273.31 17.08 0.00 0.00 203579.88 15272.74 220656.86 00:20:47.985 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme9n1 : 0.94 271.58 16.97 0.00 0.00 201092.90 18236.10 225215.89 00:20:47.985 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:47.985 Verification LBA range: start 0x0 length 0x400 00:20:47.985 Nvme10n1 : 0.90 213.07 13.32 0.00 0.00 249218.45 18692.01 240716.58 00:20:47.985 [2024-11-18T12:03:45.687Z] =================================================================================================================== 00:20:47.985 [2024-11-18T12:03:45.687Z] Total : 2741.62 171.35 0.00 0.00 211963.87 2649.93 240716.58 00:20:48.244 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:49.183 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2381271 00:20:49.183 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:20:49.183 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:49.183 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:49.183 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:49.183 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:49.183 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.184 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:49.184 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.184 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:49.184 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.184 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.184 rmmod nvme_tcp 00:20:49.184 rmmod nvme_fabrics 00:20:49.184 rmmod nvme_keyring 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2381271 ']' 
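Earlier in this trace, target/shutdown.sh's `waitforio` helper polls `bdev_get_iostat` over the bdevperf RPC socket until the bdev has served at least 100 reads (72 on the first poll, 199 on the second), retrying up to 10 times with a 0.25 s sleep. A standalone sketch of that polling pattern is below; `rpc_read_ops` is a deterministic mock standing in for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`, which needs a live bdevperf instance:

```shell
# Mock of the RPC read-op counter: pretend IO count grows ~70 ops per poll.
# $1 is the poll number; the real helper queries bdevperf over its socket.
rpc_read_ops() {
  echo $(( $1 * 70 ))
}

# Sketch of waitforio from target/shutdown.sh@58-70: poll up to 10 times,
# succeed once the observed read count crosses the 100-op threshold.
waitforio() {
  local ret=1 i count
  for (( i = 10; i != 0; i-- )); do
    count=$(rpc_read_ops $(( 11 - i )))
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "IO observed, safe to shut down"
```

Once `waitforio` returns 0, the test proceeds as in the trace: `killprocess` the bdevperf pid, then `stoptarget` tears down state files and runs `nvmftestfini` to unload the nvme-tcp modules.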
00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2381271 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2381271 ']' 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2381271 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2381271 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2381271' 00:20:49.443 killing process with pid 2381271 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2381271 00:20:49.443 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2381271 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:49.712 13:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.712 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:52.325 00:20:52.325 real 0m7.961s 00:20:52.325 user 0m24.140s 00:20:52.325 sys 0m1.436s 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.325 ************************************ 00:20:52.325 END TEST nvmf_shutdown_tc2 00:20:52.325 ************************************ 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:52.325 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:52.325 ************************************ 00:20:52.325 START TEST nvmf_shutdown_tc3 00:20:52.325 ************************************ 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.325 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:52.325 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.325 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.326 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:52.326 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:52.326 Found net devices under 0000:86:00.0: cvl_0_0 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:52.326 Found net devices under 0000:86:00.1: cvl_0_1 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.326 
13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.326 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:52.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:20:52.326 00:20:52.326 --- 10.0.0.2 ping statistics --- 00:20:52.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.326 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:20:52.326 00:20:52.326 --- 10.0.0.1 ping statistics --- 00:20:52.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.326 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2382639 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2382639 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2382639 ']' 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:52.326 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:52.327 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.327 [2024-11-18 13:03:49.809350] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:20:52.327 [2024-11-18 13:03:49.809411] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.327 [2024-11-18 13:03:49.888655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.327 [2024-11-18 13:03:49.931328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.327 [2024-11-18 13:03:49.931371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.327 [2024-11-18 13:03:49.931382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.327 [2024-11-18 13:03:49.931388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.327 [2024-11-18 13:03:49.931394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
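The nvmf_tcp_init sequence traced above (nvmf/common.sh@250–291) builds a loopback test topology: the target-side interface is moved into a private network namespace, each side gets a /24 address, and an iptables rule opens port 4420 before both directions are ping-verified. The following is a hedged sketch of those steps, not the real helper — the function name and DRY_RUN switch are inventions for illustration, and the interface/namespace names merely default to the ones seen in this log. The real commands need root:

```shell
# Sketch of the netns-based TCP topology set up by nvmf_tcp_init.
# With DRY_RUN=1 the commands are printed instead of executed.
nvmf_tcp_topology_sketch() {
  local target_if=${1:-cvl_0_0} initiator_if=${2:-cvl_0_1} ns=${3:-cvl_0_0_ns_spdk}
  run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }
  run ip -4 addr flush "$target_if"                    # drop stale addresses
  run ip -4 addr flush "$initiator_if"
  run ip netns add "$ns"                               # private ns for the target
  run ip link set "$target_if" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$initiator_if"      # initiator side
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target side
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}
```

Once wired up, traffic to 10.0.0.2:4420 crosses from the root namespace into the target namespace, which is why nvmf_tgt is later launched under `ip netns exec cvl_0_0_ns_spdk`.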
00:20:52.327 [2024-11-18 13:03:49.933068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.327 [2024-11-18 13:03:49.933194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.327 [2024-11-18 13:03:49.933301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.327 [2024-11-18 13:03:49.933301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.613 [2024-11-18 13:03:50.082120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.613 13:03:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.613 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.614 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.614 Malloc1 00:20:52.614 [2024-11-18 13:03:50.205891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.614 Malloc2 00:20:52.614 Malloc3 00:20:52.890 Malloc4 00:20:52.890 Malloc5 00:20:52.890 Malloc6 00:20:52.890 Malloc7 00:20:52.890 Malloc8 00:20:52.890 Malloc9 
00:20:52.890 Malloc10 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2382879 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2382879 /var/tmp/bdevperf.sock 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2382879 ']' 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
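The create_subsystems phase above (target/shutdown.sh@27–29) removes rpcs.txt and then, for each of the ten subsystem ids, `cat`s a block of RPC lines into it, yielding the Malloc1..Malloc10 bdevs and their subsystems in one batched rpc.py call. A sketch of that file-generation pattern — the function name and the exact RPC lines below are assumptions modeled on SPDK's nvmf test helpers, not copied from shutdown.sh:

```shell
# Sketch of the per-subsystem rpcs.txt generation loop.
# $1 = output file; remaining args = subsystem ids (e.g. 1..10).
# The RPC lines are illustrative; the real script's heredoc differs.
gen_rpcs_sketch() {
  local out=$1; shift
  : > "$out"                       # start fresh, like the rm -rf above
  local i
  for i in "$@"; do
    cat >> "$out" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
  done
}
```

Feeding the resulting file to rpc.py in one invocation is what makes the ten Malloc bdevs appear back-to-back in the log.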
00:20:53.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.170 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 "adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": ${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": "bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 
"adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": ${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": "bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 "adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": ${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": "bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 "adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": ${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": "bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 "adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": ${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": "bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 "adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": 
${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": "bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 "adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": ${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": "bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 [2024-11-18 13:03:50.688710] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:20:53.171 [2024-11-18 13:03:50.688756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382879 ] 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 "adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": ${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": "bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 "adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": ${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": 
"bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.171 { 00:20:53.171 "params": { 00:20:53.171 "name": "Nvme$subsystem", 00:20:53.171 "trtype": "$TEST_TRANSPORT", 00:20:53.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.171 "adrfam": "ipv4", 00:20:53.171 "trsvcid": "$NVMF_PORT", 00:20:53.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.171 "hdgst": ${hdgst:-false}, 00:20:53.171 "ddgst": ${ddgst:-false} 00:20:53.171 }, 00:20:53.171 "method": "bdev_nvme_attach_controller" 00:20:53.171 } 00:20:53.171 EOF 00:20:53.171 )") 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
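The repeated heredoc blocks traced above are gen_nvmf_target_json assembling the `--json` config that bdevperf reads from /dev/fd/63: one `bdev_nvme_attach_controller` parameter object per subsystem id, accumulated into an array and comma-joined. A simplified sketch of that assembly, with the transport/address/port hard-coded to the values substituted in this run (the real helper takes them from `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT` and pipes the result through `jq .` into a full config document):

```shell
# Sketch of gen_nvmf_target_json's per-subsystem config assembly.
# Arguments are subsystem ids; with no arguments it defaults to "1",
# mirroring the "${@:-1}" expansion seen in the trace.
gen_target_json_sketch() {
  local subsystem
  local config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,                      # comma-join the blocks, as in common.sh
  printf '%s\n' "${config[*]}"
}
```

The comma-joined output is exactly the `{...},{...}` shape visible in the `printf '%s\n'` trace that follows.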
00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:53.171 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:53.171 "params": { 00:20:53.172 "name": "Nvme1", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 },{ 00:20:53.172 "params": { 00:20:53.172 "name": "Nvme2", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 },{ 00:20:53.172 "params": { 00:20:53.172 "name": "Nvme3", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 },{ 00:20:53.172 "params": { 00:20:53.172 "name": "Nvme4", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 },{ 00:20:53.172 "params": { 
00:20:53.172 "name": "Nvme5", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 },{ 00:20:53.172 "params": { 00:20:53.172 "name": "Nvme6", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 },{ 00:20:53.172 "params": { 00:20:53.172 "name": "Nvme7", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 },{ 00:20:53.172 "params": { 00:20:53.172 "name": "Nvme8", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 },{ 00:20:53.172 "params": { 00:20:53.172 "name": "Nvme9", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 },{ 00:20:53.172 "params": { 00:20:53.172 "name": "Nvme10", 00:20:53.172 "trtype": "tcp", 00:20:53.172 "traddr": "10.0.0.2", 00:20:53.172 "adrfam": "ipv4", 00:20:53.172 "trsvcid": "4420", 00:20:53.172 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:53.172 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:53.172 "hdgst": false, 00:20:53.172 "ddgst": false 00:20:53.172 }, 00:20:53.172 "method": "bdev_nvme_attach_controller" 00:20:53.172 }' 00:20:53.172 [2024-11-18 13:03:50.767174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.172 [2024-11-18 13:03:50.808384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.198 Running I/O for 10 seconds... 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.199 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.199 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.199 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.199 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=22 00:20:55.199 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 22 -ge 100 ']' 00:20:55.199 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:55.483 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:20:55.483 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.483 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.483 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.483 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.483 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.483 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.483 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:55.483 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2382639 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2382639 ']' 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2382639 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:20:55.483 13:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2382639 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2382639' 00:20:55.483 killing process with pid 2382639 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 2382639 00:20:55.483 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 2382639 00:20:55.483 [2024-11-18 13:03:53.064273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995170 is same with the state(6) to be set 00:20:55.484 [2024-11-18 13:03:53.065963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b079e0 is same with the state(6) to be set 00:20:55.485 [2024-11-18 13:03:53.067535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995640 is same with the state(6) to be set 00:20:55.485 [2024-11-18 13:03:53.069022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995b10 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.069864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.069963]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.069970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.069976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.069982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.069989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.069999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070043] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070118] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070197] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996000 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070935] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.070999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071016] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.486 [2024-11-18 13:03:53.071088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071094] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071169] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071244] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.071312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996380 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072727] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072818] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072900] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.487 [2024-11-18 13:03:53.072961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.072967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.072973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.072979] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.072986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.072992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.072998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073054] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996d20 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073846] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073921] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.073995] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074071] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074145] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.074189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997210 is same with the state(6) to be set 00:20:55.488 [2024-11-18 13:03:53.075067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 
[2024-11-18 13:03:53.075132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 
13:03:53.075484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.489 [2024-11-18 13:03:53.075697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.489 [2024-11-18 13:03:53.075703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 
[2024-11-18 13:03:53.075812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.075990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.075996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.076010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.076026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.076040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.490 [2024-11-18 13:03:53.076054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:55.490 [2024-11-18 13:03:53.076236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2518970 is same with the state(6) to be set 00:20:55.490 [2024-11-18 13:03:53.076326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076334] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25194c0 is same with the state(6) to be set 00:20:55.490 [2024-11-18 13:03:53.076416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.490 [2024-11-18 13:03:53.076435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.490 [2024-11-18 13:03:53.076442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242f610 is same with the state(6) to be set 00:20:55.491 [2024-11-18 13:03:53.076500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076546] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294c320 is same with the state(6) to be set 00:20:55.491 [2024-11-18 13:03:53.076583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251ad50 is same with the 
state(6) to be set 00:20:55.491 [2024-11-18 13:03:53.076661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251b1b0 is same with the state(6) to be set 00:20:55.491 [2024-11-18 13:03:53.076739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076755] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2946da0 is same with the state(6) to be set 00:20:55.491 [2024-11-18 13:03:53.076818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29836b0 is same with the state(6) to be set 00:20:55.491 [2024-11-18 13:03:53.076895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2990b60 is same with the state(6) to be set 00:20:55.491 [2024-11-18 13:03:53.076976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.076992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.076999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.077006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.077012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.077019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.491 [2024-11-18 13:03:53.077026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.077032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29834a0 is same with the state(6) to be set 00:20:55.491 [2024-11-18 13:03:53.077481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:55.491 [2024-11-18 13:03:53.077501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.077512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.491 [2024-11-18 13:03:53.077519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.491 [2024-11-18 13:03:53.077528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 
13:03:53.077593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.077911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 
13:03:53.077930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.077938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.492 [2024-11-18 13:03:53.091446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.492 [2024-11-18 13:03:53.091458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 
[2024-11-18 13:03:53.091689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.091945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.091956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29203e0 is same with the state(6) to be set 00:20:55.493 [2024-11-18 13:03:53.093528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2518970 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25194c0 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242f610 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x294c320 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251ad50 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251b1b0 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2946da0 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x29836b0 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2990b60 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29834a0 (9): Bad file descriptor 00:20:55.493 [2024-11-18 13:03:53.093893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.093909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.093925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.093935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.093947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.093957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.093968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.093977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.093988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 
[2024-11-18 13:03:53.093998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.094009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.094018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.094029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.094038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.094049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.094058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.094069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.094082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.094094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.493 [2024-11-18 13:03:53.094104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.493 [2024-11-18 13:03:53.094115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:55.494 [2024-11-18 13:03:53.094347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094810] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.494 [2024-11-18 13:03:53.094894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.494 [2024-11-18 13:03:53.094903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.094914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.094923] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.094934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.094943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.094954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.094963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.094974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.094983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.094994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 
13:03:53.095157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:55.495 [2024-11-18 13:03:53.095635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.495 [2024-11-18 13:03:53.095715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.495 [2024-11-18 13:03:53.095724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.095988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.095999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 
13:03:53.096091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096201] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 
[2024-11-18 13:03:53.096444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.496 [2024-11-18 13:03:53.096505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.496 [2024-11-18 13:03:53.096516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.096528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.096539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.096548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.096559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.096568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.096580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.096589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.096600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.096609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.096620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.096629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.096640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.096649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.098370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:55.497 [2024-11-18 13:03:53.101592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting 
controller 00:20:55.497 [2024-11-18 13:03:53.101626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:55.497 [2024-11-18 13:03:53.101819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.497 [2024-11-18 13:03:53.101838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29836b0 with addr=10.0.0.2, port=4420 00:20:55.497 [2024-11-18 13:03:53.101850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29836b0 is same with the state(6) to be set 00:20:55.497 [2024-11-18 13:03:53.102381] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:55.497 [2024-11-18 13:03:53.102505] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:55.497 [2024-11-18 13:03:53.102526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:55.497 [2024-11-18 13:03:53.102650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.497 [2024-11-18 13:03:53.102667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x294c320 with addr=10.0.0.2, port=4420 00:20:55.497 [2024-11-18 13:03:53.102678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294c320 is same with the state(6) to be set 00:20:55.497 [2024-11-18 13:03:53.102762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.497 [2024-11-18 13:03:53.102776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2518970 with addr=10.0.0.2, port=4420 00:20:55.497 [2024-11-18 13:03:53.102786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2518970 is same with the state(6) to be set 00:20:55.497 [2024-11-18 13:03:53.102804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x29836b0 (9): Bad file descriptor 00:20:55.497 [2024-11-18 13:03:53.102869] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:55.497 [2024-11-18 13:03:53.102923] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:55.497 [2024-11-18 13:03:53.103558] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:55.497 [2024-11-18 13:03:53.103600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 
[2024-11-18 13:03:53.103702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.103988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.103997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.104008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.104017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.104029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.104037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.104049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.104058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.104069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.104078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.104089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.104100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.497 [2024-11-18 13:03:53.104111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.497 [2024-11-18 13:03:53.104120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104172] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104283] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.498 [2024-11-18 13:03:53.104395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.498 [2024-11-18 13:03:53.104406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.498 [2024-11-18 13:03:53.104824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.498 [2024-11-18 13:03:53.104834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2921920 is same with the state(6) to be set
00:20:55.498 [2024-11-18 13:03:53.104962] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:55.498 [2024-11-18 13:03:53.105161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.498 [2024-11-18 13:03:53.105176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2946da0 with addr=10.0.0.2, port=4420
00:20:55.498 [2024-11-18 13:03:53.105186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2946da0 is same with the state(6) to be set
00:20:55.498 [2024-11-18 13:03:53.105203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x294c320 (9): Bad file descriptor
00:20:55.498 [2024-11-18 13:03:53.105215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2518970 (9): Bad file descriptor
00:20:55.498 [2024-11-18 13:03:53.105225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:20:55.498 [2024-11-18 13:03:53.105234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:20:55.499 [2024-11-18 13:03:53.105245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:20:55.499 [2024-11-18 13:03:53.105256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:20:55.499 [2024-11-18 13:03:53.105280] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:20:55.499 [2024-11-18 13:03:53.106668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:55.499 [2024-11-18 13:03:53.106699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2946da0 (9): Bad file descriptor
00:20:55.499 [2024-11-18 13:03:53.106712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:20:55.499 [2024-11-18 13:03:53.106721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:20:55.499 [2024-11-18 13:03:53.106730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:20:55.499 [2024-11-18 13:03:53.106739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:20:55.499 [2024-11-18 13:03:53.106749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:20:55.499 [2024-11-18 13:03:53.106757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:20:55.499 [2024-11-18 13:03:53.106766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:55.499 [2024-11-18 13:03:53.106774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:20:55.499 [2024-11-18 13:03:53.106837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.106849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.106864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.106873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.106885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.106894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.106906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.106915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.106926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.106935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.106947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.106959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.106971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.106980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.106991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.499 [2024-11-18 13:03:53.107516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.499 [2024-11-18 13:03:53.107527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.107983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.107995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.108004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.108015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.108024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.108035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.108044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.108055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.108064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.108076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.108084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.108095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.108104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.108115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.108124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.108135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.108144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.108154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271f460 is same with the state(6) to be set
00:20:55.500 [2024-11-18 13:03:53.109440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.109452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.109464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.109470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.109480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.109486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.500 [2024-11-18 13:03:53.109495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.500 [2024-11-18 13:03:53.109504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.501 [2024-11-18 13:03:53.109798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.501 [2024-11-18 13:03:53.109805] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 
13:03:53.109976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.109991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.109998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.110006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.110012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.110021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.110028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.110036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.110042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.110050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.110057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.110066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.110073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.110081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.501 [2024-11-18 13:03:53.110088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.501 [2024-11-18 13:03:53.110096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:55.502 [2024-11-18 13:03:53.110229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110311] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.110404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.110412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27205c0 is same with the state(6) to be set 00:20:55.502 [2024-11-18 13:03:53.111430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:55.502 [2024-11-18 13:03:53.111511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.502 [2024-11-18 13:03:53.111697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.502 [2024-11-18 13:03:53.111704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111846] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.503 [2024-11-18 13:03:53.111919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.503 [2024-11-18 13:03:53.111926] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.503 [2024-11-18 13:03:53.111934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.503 [2024-11-18 13:03:53.111940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION entry pairs repeat for cid:34-62 (lba 12544-16128, step 128), timestamps 13:03:53.111948-.112369 ...]
00:20:55.504 [2024-11-18 13:03:53.112377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.504 [2024-11-18 13:03:53.112384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.504 [2024-11-18 13:03:53.112391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291ef60 is same with the state(6) to be set
00:20:55.504 [2024-11-18 13:03:53.113413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.504 [2024-11-18 13:03:53.113427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION entry pairs repeat for cid:1-62 (lba 16512-24320, step 128), timestamps 13:03:53.113438-.114357 ...]
00:20:55.505 [2024-11-18 13:03:53.114366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.505 [2024-11-18 13:03:53.114373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.505 [2024-11-18 13:03:53.114380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2922e60 is same with the state(6) to be set
00:20:55.505 [2024-11-18 13:03:53.115382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.505 [2024-11-18 13:03:53.115394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION entry pairs repeat for cid:1-21 (lba 8320-10880, step 128), timestamps 13:03:53.115404-.115708 ...]
00:20:55.506 [2024-11-18 13:03:53.115715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115807] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115886] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.506 [2024-11-18 13:03:53.115976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.506 [2024-11-18 13:03:53.115984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.115990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.115999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 
13:03:53.116057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:55.507 [2024-11-18 13:03:53.116312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.507 [2024-11-18 13:03:53.116333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.507 [2024-11-18 13:03:53.116340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a6270 is same with the state(6) to be set 00:20:55.507 [2024-11-18 13:03:53.117312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:55.507 [2024-11-18 13:03:53.117326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:55.507 [2024-11-18 13:03:53.117336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:55.507 [2024-11-18 13:03:53.117346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:55.507 [2024-11-18 13:03:53.117625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.507 [2024-11-18 13:03:53.117640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242f610 with addr=10.0.0.2, port=4420 00:20:55.507 [2024-11-18 13:03:53.117648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242f610 is same with the state(6) to be set 00:20:55.507 [2024-11-18 13:03:53.117655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:20:55.507 [2024-11-18 13:03:53.117661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:20:55.507 [2024-11-18 13:03:53.117669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:20:55.507 [2024-11-18 13:03:53.117676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:20:55.507 [2024-11-18 13:03:53.117721] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:20:55.507 [2024-11-18 13:03:53.117735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242f610 (9): Bad file descriptor
00:20:55.507 task offset: 16384 on job bdev=Nvme9n1 fails
00:20:55.507
00:20:55.507 Latency(us)
00:20:55.507 [2024-11-18T12:03:53.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:55.507 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.507 Job: Nvme1n1 ended in about 0.68 seconds with error
00:20:55.507 Verification LBA range: start 0x0 length 0x400
00:20:55.507 Nvme1n1 : 0.68 189.08 11.82 94.54 0.00 222237.68 17666.23 219745.06
00:20:55.507 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.507 Job: Nvme2n1 ended in about 0.68 seconds with error
00:20:55.507 Verification LBA range: start 0x0 length 0x400
00:20:55.507 Nvme2n1 : 0.68 188.48 11.78 94.24 0.00 216998.07 18805.98 218833.25
00:20:55.507 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.507 Job: Nvme3n1 ended in about 0.67 seconds with error
00:20:55.507 Verification LBA range: start 0x0 length 0x400
00:20:55.507 Nvme3n1 : 0.67 191.80 11.99 95.90 0.00 207065.56 15500.69 198773.54
00:20:55.507 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.507 Job: Nvme4n1 ended in about 0.67 seconds with error
00:20:55.507 Verification LBA range: start 0x0 length 0x400
00:20:55.507 Nvme4n1 : 0.67 191.44 11.97 95.72 0.00 201474.82 22681.15 196949.93
00:20:55.507 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.507 Job: Nvme5n1 ended in about 0.68 seconds with error
00:20:55.507 Verification LBA range: start 0x0 length 0x400
00:20:55.507 Nvme5n1 : 0.68 93.96 5.87 93.96 0.00 299679.39 18919.96 240716.58
00:20:55.507 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.507 Job: Nvme6n1 ended in about 0.67 seconds with error
00:20:55.507 Verification LBA range: start 0x0 length 0x400
00:20:55.507 Nvme6n1 : 0.67 192.31 12.02 96.16 0.00 188526.71 18122.13 217009.64
00:20:55.507 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.507 Job: Nvme7n1 ended in about 0.67 seconds with error
00:20:55.507 Verification LBA range: start 0x0 length 0x400
00:20:55.507 Nvme7n1 : 0.67 197.28 12.33 87.51 0.00 185222.75 7465.41 201508.95
00:20:55.507 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.507 Job: Nvme8n1 ended in about 0.68 seconds with error
00:20:55.507 Verification LBA range: start 0x0 length 0x400
00:20:55.508 Nvme8n1 : 0.68 187.39 11.71 93.69 0.00 182423.67 15500.69 189655.49
00:20:55.508 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.508 Job: Nvme9n1 ended in about 0.66 seconds with error
00:20:55.508 Verification LBA range: start 0x0 length 0x400
00:20:55.508 Nvme9n1 : 0.66 193.65 12.10 96.83 0.00 169036.58 31001.38 215186.03
00:20:55.508 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:55.508 Job: Nvme10n1 ended in about 0.69 seconds with error
00:20:55.508 Verification LBA range: start 0x0 length 0x400
00:20:55.508 Nvme10n1 : 0.69 93.42 5.84 93.42 0.00 256777.79 19033.93 240716.58
00:20:55.508 [2024-11-18T12:03:53.210Z] ===================================================================================================================
00:20:55.508 [2024-11-18T12:03:53.210Z] Total : 1718.81 107.43 941.97 0.00 208281.14 7465.41 240716.58
00:20:55.508 [2024-11-18 13:03:53.147336] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:55.508 [2024-11-18 13:03:53.147394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:20:55.508 [2024-11-18 13:03:53.147728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.147746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251b1b0 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.147757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251b1b0 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.147976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.147987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251ad50 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.147995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251ad50 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.148115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.148125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25194c0 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.148133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25194c0 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.148275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.148285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29834a0 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.148299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29834a0 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.149474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:55.508 [2024-11-18 13:03:53.149491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:20:55.508 [2024-11-18 13:03:53.149499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:55.508 [2024-11-18 13:03:53.149509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:20:55.508 [2024-11-18 13:03:53.149734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.149749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2990b60 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.149757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2990b60 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.149769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251b1b0 (9): Bad file descriptor
00:20:55.508 [2024-11-18 13:03:53.149780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251ad50 (9): Bad file descriptor
00:20:55.508 [2024-11-18 13:03:53.149789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25194c0 (9): Bad file descriptor
00:20:55.508 [2024-11-18 13:03:53.149797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29834a0 (9): Bad file descriptor
00:20:55.508 [2024-11-18 13:03:53.149805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:20:55.508 [2024-11-18 13:03:53.149812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:20:55.508 [2024-11-18 13:03:53.149820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:20:55.508 [2024-11-18 13:03:53.149830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:20:55.508 [2024-11-18 13:03:53.149866] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:20:55.508 [2024-11-18 13:03:53.149877] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:20:55.508 [2024-11-18 13:03:53.149888] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:20:55.508 [2024-11-18 13:03:53.149897] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:20:55.508 [2024-11-18 13:03:53.150127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.150142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29836b0 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.150149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29836b0 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.150370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.150381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2518970 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.150389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2518970 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.150533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.150543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x294c320 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.150550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294c320 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.150676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.150686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2946da0 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.150694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2946da0 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.150703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2990b60 (9): Bad file descriptor
00:20:55.508 [2024-11-18 13:03:53.150712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:55.508 [2024-11-18 13:03:53.150718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:55.508 [2024-11-18 13:03:53.150725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:55.508 [2024-11-18 13:03:53.150733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:55.508 [2024-11-18 13:03:53.150741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:20:55.508 [2024-11-18 13:03:53.150747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:20:55.508 [2024-11-18 13:03:53.150753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:20:55.508 [2024-11-18 13:03:53.150759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:20:55.508 [2024-11-18 13:03:53.150766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:55.508 [2024-11-18 13:03:53.150773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:55.508 [2024-11-18 13:03:53.150779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:55.508 [2024-11-18 13:03:53.150784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:20:55.508 [2024-11-18 13:03:53.150791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:20:55.508 [2024-11-18 13:03:53.150797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:20:55.508 [2024-11-18 13:03:53.150803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:20:55.508 [2024-11-18 13:03:53.150809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:20:55.508 [2024-11-18 13:03:53.150871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:55.508 [2024-11-18 13:03:53.150891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29836b0 (9): Bad file descriptor
00:20:55.508 [2024-11-18 13:03:53.150900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2518970 (9): Bad file descriptor
00:20:55.508 [2024-11-18 13:03:53.150909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x294c320 (9): Bad file descriptor
00:20:55.508 [2024-11-18 13:03:53.150917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2946da0 (9): Bad file descriptor
00:20:55.508 [2024-11-18 13:03:53.150925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:20:55.508 [2024-11-18 13:03:53.150931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:20:55.508 [2024-11-18 13:03:53.150937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:20:55.508 [2024-11-18 13:03:53.150943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:20:55.508 [2024-11-18 13:03:53.151069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:55.508 [2024-11-18 13:03:53.151081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242f610 with addr=10.0.0.2, port=4420
00:20:55.508 [2024-11-18 13:03:53.151088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242f610 is same with the state(6) to be set
00:20:55.508 [2024-11-18 13:03:53.151095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:20:55.508 [2024-11-18 13:03:53.151101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:20:55.508 [2024-11-18 13:03:53.151107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:20:55.508 [2024-11-18 13:03:53.151114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:20:55.508 [2024-11-18 13:03:53.151121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:20:55.508 [2024-11-18 13:03:53.151127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:20:55.509 [2024-11-18 13:03:53.151134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:55.509 [2024-11-18 13:03:53.151140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:20:55.509 [2024-11-18 13:03:53.151147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:55.509 [2024-11-18 13:03:53.151152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:55.509 [2024-11-18 13:03:53.151158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:55.509 [2024-11-18 13:03:53.151164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:55.509 [2024-11-18 13:03:53.151171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:55.509 [2024-11-18 13:03:53.151177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:55.509 [2024-11-18 13:03:53.151183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:55.509 [2024-11-18 13:03:53.151189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:55.509 [2024-11-18 13:03:53.151215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242f610 (9): Bad file descriptor 00:20:55.509 [2024-11-18 13:03:53.151239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:55.509 [2024-11-18 13:03:53.151246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:55.509 [2024-11-18 13:03:53.151254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:20:55.509 [2024-11-18 13:03:53.151259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:55.769 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2382879 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2382879 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2382879 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@670 -- # es=1 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.151 rmmod nvme_tcp 00:20:57.151 rmmod nvme_fabrics 00:20:57.151 rmmod nvme_keyring 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@128 -- # set -e 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:57.151 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2382639 ']' 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2382639 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2382639 ']' 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2382639 00:20:57.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2382639) - No such process 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2382639 is not found' 00:20:57.152 Process with pid 2382639 is not found 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.152 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:59.061 00:20:59.061 real 0m7.179s 00:20:59.061 user 0m16.563s 00:20:59.061 sys 0m1.278s 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.061 ************************************ 00:20:59.061 END TEST nvmf_shutdown_tc3 00:20:59.061 ************************************ 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:59.061 
************************************ 00:20:59.061 START TEST nvmf_shutdown_tc4 00:20:59.061 ************************************ 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:59.061 
13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:59.061 
13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:59.061 13:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:59.061 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:59.061 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:59.061 13:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.061 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.062 13:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:59.062 Found net devices under 0000:86:00.0: cvl_0_0 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:59.062 Found net devices under 0000:86:00.1: cvl_0_1 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:59.062 13:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:59.062 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:59.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:59.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:20:59.321 00:20:59.321 --- 10.0.0.2 ping statistics --- 00:20:59.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.321 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:20:59.321 00:20:59.321 --- 10.0.0.1 ping statistics --- 00:20:59.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.321 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:59.321 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:59.321 13:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:59.321 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:59.321 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:59.321 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:59.581 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2384027 00:20:59.581 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2384027 00:20:59.581 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:59.581 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 2384027 ']' 00:20:59.581 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.581 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:59.581 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:59.581 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:59.581 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:59.581 [2024-11-18 13:03:57.084553] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:20:59.581 [2024-11-18 13:03:57.084600] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.581 [2024-11-18 13:03:57.146601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.581 [2024-11-18 13:03:57.189327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.581 [2024-11-18 13:03:57.189382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.581 [2024-11-18 13:03:57.189390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.581 [2024-11-18 13:03:57.189396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.581 [2024-11-18 13:03:57.189401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.581 [2024-11-18 13:03:57.190914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.581 [2024-11-18 13:03:57.191021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.581 [2024-11-18 13:03:57.191127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.581 [2024-11-18 13:03:57.191128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:59.841 [2024-11-18 13:03:57.327250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.841 13:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.841 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:59.841 Malloc1 00:20:59.841 [2024-11-18 13:03:57.428599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.841 Malloc2 00:20:59.841 Malloc3 00:20:59.841 Malloc4 00:21:00.101 Malloc5 00:21:00.101 Malloc6 00:21:00.101 Malloc7 00:21:00.101 Malloc8 00:21:00.101 Malloc9 
00:21:00.101 Malloc10 00:21:00.360 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.360 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:00.360 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.360 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:00.360 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2384209 00:21:00.360 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:00.360 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:00.360 [2024-11-18 13:03:57.937409] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2384027 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2384027 ']' 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2384027 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2384027 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2384027' 00:21:05.641 killing process with pid 2384027 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 2384027 00:21:05.641 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 2384027 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 
00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 
starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 [2024-11-18 13:04:02.933183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.641 starting I/O failed: -6 00:21:05.641 starting I/O failed: -6 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, 
sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.641 Write completed with error (sct=0, sc=8) 00:21:05.641 starting I/O failed: -6 00:21:05.642 [2024-11-18 13:04:02.934181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed 
with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 
Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 [2024-11-18 13:04:02.935178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, 
sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 [2024-11-18 13:04:02.935369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880a50 is same with the state(6) to be set 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 [2024-11-18 13:04:02.935413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880a50 is same with the state(6) to be set 00:21:05.642 [2024-11-18 13:04:02.935421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880a50 is same with the state(6) to be set 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 [2024-11-18 13:04:02.935428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880a50 is same with starting I/O failed: -6 00:21:05.642 the state(6) to be set 00:21:05.642 [2024-11-18 13:04:02.935437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880a50 is same with the state(6) to be set 00:21:05.642 [2024-11-18 13:04:02.935443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880a50 is same with the state(6) to be set 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 [2024-11-18 13:04:02.935450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880a50 is same with the state(6) to be set 00:21:05.642 starting I/O failed: -6 00:21:05.642 [2024-11-18 13:04:02.935457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880a50 is same with the state(6) to be set 00:21:05.642 [2024-11-18 13:04:02.935464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880a50 is same with the state(6) to be set 00:21:05.642 Write 
completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 [2024-11-18 13:04:02.935878] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed060 is same with the state(6) to be set 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 [2024-11-18 13:04:02.935909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed060 is same with the state(6) to be set 00:21:05.642 starting I/O failed: -6 00:21:05.642 [2024-11-18 13:04:02.935917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed060 is same with the state(6) to be set 00:21:05.642 [2024-11-18 13:04:02.935924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed060 is same with the state(6) to be set 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 [2024-11-18 13:04:02.935931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed060 is same with the state(6) to be set 00:21:05.642 starting I/O failed: -6 00:21:05.642 [2024-11-18 13:04:02.935942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed060 is same with the state(6) to be set 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write 
completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.642 Write completed with error (sct=0, sc=8) 00:21:05.642 starting I/O failed: -6 00:21:05.643 Write completed with error (sct=0, sc=8) 00:21:05.643 starting I/O failed: -6 00:21:05.643 Write completed with error (sct=0, sc=8) 00:21:05.643 starting I/O failed: -6 00:21:05.643 Write completed with error (sct=0, sc=8) 00:21:05.643 starting I/O failed: -6 00:21:05.643 Write completed with error (sct=0, sc=8) 00:21:05.643 [2024-11-18 13:04:02.936242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed530 is same with starting I/O failed: -6 00:21:05.643 the state(6) to be set 00:21:05.643 [2024-11-18 13:04:02.936266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed530 is same with the state(6) to be set 00:21:05.643 Write completed with error (sct=0, sc=8) 00:21:05.643 [2024-11-18 13:04:02.936274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed530 is same with the state(6) to be set 00:21:05.643 starting I/O failed: -6 00:21:05.643 [2024-11-18 13:04:02.936281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed530 is same with the state(6) to be set 00:21:05.643 [2024-11-18 13:04:02.936288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed530 is same with the state(6) to be set 00:21:05.643 Write completed with error (sct=0, sc=8) 00:21:05.643 [2024-11-18 13:04:02.936294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed530 is same with the state(6) to be set 00:21:05.643 starting I/O failed: -6 00:21:05.643 [2024-11-18 13:04:02.936302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed530 is same with the state(6) to be set 00:21:05.643 [2024-11-18 13:04:02.936309] 
00:21:05.643 tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed530 is same with the state(6) to be set
00:21:05.643 Write completed with error (sct=0, sc=8)
00:21:05.643 starting I/O failed: -6
00:21:05.643 (previous two messages repeated, interleaved with the errors below)
00:21:05.643 [2024-11-18 13:04:02.936603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880580 is same with the state(6) to be set (message repeated 6 times through 13:04:02.936653)
00:21:05.643 [2024-11-18 13:04:02.936788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:05.643 NVMe io qpair process completion error
00:21:05.643 [2024-11-18 13:04:02.937459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fbe0 is same with the state(6) to be set (message repeated 10 times through 13:04:02.937531)
00:21:05.643 [2024-11-18 13:04:02.937994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18800b0 is same with the state(6) to be set (message repeated 9 times through 13:04:02.938058)
00:21:05.643 [2024-11-18 13:04:02.938344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f240 is same with the state(6) to be set (message repeated 8 times through 13:04:02.938412)
00:21:05.643 Write completed with error (sct=0, sc=8)
00:21:05.643 starting I/O failed: -6
00:21:05.643 (previous two messages repeated)
00:21:05.643 [2024-11-18 13:04:02.942700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.644 [2024-11-18 13:04:02.943367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef210 is same with the state(6) to be set (message repeated 6 times through 13:04:02.943422)
00:21:05.644 [2024-11-18 13:04:02.943631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.644 [2024-11-18 13:04:02.943820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef6e0 is same with the state(6) to be set (message repeated 8 times through 13:04:02.943885)
00:21:05.644 [2024-11-18 13:04:02.944426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aefbb0 is same with the state(6) to be set (message repeated 7 times through 13:04:02.944484)
00:21:05.644 [2024-11-18 13:04:02.944639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:05.644 [2024-11-18 13:04:02.944748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeed40 is same with the state(6) to be set (message repeated 5 times through 13:04:02.944791)
00:21:05.645 [2024-11-18 13:04:02.945270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0550 is same with the state(6) to be set (message repeated 10 times through 13:04:02.945337)
00:21:05.645 [2024-11-18 13:04:02.945636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0a20 is same with the state(6) to be set (message repeated 7 times through 13:04:02.945687)
00:21:05.645 [2024-11-18 13:04:02.946079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0ef0 is same with the state(6) to be set (message repeated 6 times through 13:04:02.946114)
00:21:05.645 [2024-11-18 13:04:02.946258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.645 NVMe io qpair process completion error
00:21:05.645 [2024-11-18 13:04:02.946627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0080 is same with the state(6) to be set (message repeated 7 times through 13:04:02.946675)
00:21:05.645 Write completed with error (sct=0, sc=8)
00:21:05.645 starting I/O failed: -6
00:21:05.645 (previous two messages repeated)
00:21:05.646 [2024-11-18 13:04:02.947199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.646 [2024-11-18 13:04:02.948093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.647 [2024-11-18 13:04:02.949143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:05.647 Write completed with error (sct=0, sc=8)
00:21:05.647 starting I/O failed: -6
00:21:05.647 Write
completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 
Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 
00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 [2024-11-18 13:04:02.950676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.647 NVMe io qpair process completion error 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 
00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 [2024-11-18 13:04:02.951682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such 
device or address) on qpair id 2 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 starting I/O failed: -6 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.647 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error 
(sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 [2024-11-18 13:04:02.952566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write 
completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 
00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 [2024-11-18 13:04:02.953590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with 
error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed 
with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write completed with error (sct=0, sc=8) 00:21:05.648 starting I/O failed: -6 00:21:05.648 Write 
completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 [2024-11-18 13:04:02.955731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:05.649 NVMe io qpair process completion error 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 
00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 [2024-11-18 13:04:02.956806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such 
device or address) on qpair id 2 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error (sct=0, sc=8) 00:21:05.649 starting I/O failed: -6 00:21:05.649 Write completed with error 
(sct=0, sc=8)
00:21:05.649 Write completed with error (sct=0, sc=8)
00:21:05.649 starting I/O failed: -6
00:21:05.649 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided between the errors below ...]
00:21:05.649 [2024-11-18 13:04:02.957647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.650 [2024-11-18 13:04:02.958673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.650 [2024-11-18 13:04:02.965532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:05.650 NVMe io qpair process completion error
00:21:05.650 [2024-11-18 13:04:02.966575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.651 [2024-11-18 13:04:02.967535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.651 [2024-11-18 13:04:02.968559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.652 [2024-11-18 13:04:02.970664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:05.652 NVMe io qpair process completion error
00:21:05.652 [2024-11-18 13:04:02.971817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.652 [2024-11-18 13:04:02.972776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.653 [2024-11-18 13:04:02.973885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.653 Write completed with error (sct=0, sc=8)
00:21:05.653 starting I/O failed:
-6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O 
failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 [2024-11-18 13:04:02.975558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:05.653 NVMe io qpair process completion error 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 starting I/O failed: -6 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.653 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, 
sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 [2024-11-18 13:04:02.976522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 
Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 
starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 [2024-11-18 13:04:02.977480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O 
failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write 
completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 [2024-11-18 13:04:02.978534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 
starting I/O failed: -6 00:21:05.654 Write completed with error (sct=0, sc=8) 00:21:05.654 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 
00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, 
sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 [2024-11-18 13:04:02.990571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:05.655 NVMe io qpair process completion error 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error 
(sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 [2024-11-18 13:04:02.991956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 
00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 
00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 Write completed with error (sct=0, sc=8) 00:21:05.655 starting I/O failed: -6 00:21:05.655 [2024-11-18 13:04:02.993067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 
starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 Write completed with error (sct=0, sc=8) 00:21:05.656 starting I/O failed: -6 00:21:05.656 
Write completed with error (sct=0, sc=8)
00:21:05.656 Write completed with error (sct=0, sc=8)
00:21:05.656 starting I/O failed: -6
[identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines, repeated here and between each of the messages below, elided]
00:21:05.656 [2024-11-18 13:04:02.994298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.657 [2024-11-18 13:04:02.996618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:05.657 NVMe io qpair process completion error
00:21:05.657 [2024-11-18 13:04:02.997979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:05.657 [2024-11-18 13:04:02.999106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:05.658 [2024-11-18 13:04:03.000915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:05.658 [2024-11-18 13:04:03.007629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:05.658 NVMe io qpair process completion error
00:21:05.658 Initializing NVMe Controllers
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:05.658 Controller IO queue size 128, less than required.
00:21:05.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:05.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:05.658 Initialization complete. Launching workers.
00:21:05.658 ========================================================
00:21:05.658 Latency(us)
00:21:05.658 Device Information : IOPS MiB/s Average min max
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2192.21 94.20 58392.48 701.08 117712.96
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2152.65 92.50 59475.83 899.61 117026.24
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2157.35 92.70 59364.73 744.86 115507.25
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2174.25 93.42 58980.21 667.47 112014.52
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2162.91 92.94 59309.49 533.29 111295.70
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2152.01 92.47 59624.44 966.50 110164.77
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2157.35 92.70 58878.53 716.69 109121.61
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2169.12 93.20 59256.41 940.83 126816.03
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2150.51 92.40 59803.14 973.94 130195.50
00:21:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2111.58 90.73 60132.51 931.29 108367.15
00:21:05.659 ========================================================
00:21:05.659 Total : 21579.94 927.26 59317.99 533.29 130195.50
00:21:05.659
00:21:05.659 [2024-11-18 13:04:03.014898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe34900 is same with the state(6) to be set
00:21:05.659 [2024-11-18 13:04:03.014953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32ef0 is same with the state(6) to be set
00:21:05.659 [2024-11-18 13:04:03.014991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe34ae0 is same with the state(6) to be set
00:21:05.659 [2024-11-18 13:04:03.015033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe33a70 is same with the state(6) to be set
00:21:05.659 [2024-11-18 13:04:03.015070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32890 is same with the state(6) to be set
00:21:05.659 [2024-11-18 13:04:03.015106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32560 is same with the state(6) to be set
00:21:05.659 [2024-11-18 13:04:03.015142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe34720 is same with the state(6) to be set
00:21:05.659 [2024-11-18 13:04:03.015178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe33410 is same with the state(6) to be set
00:21:05.659 [2024-11-18 13:04:03.015215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe33740 is same with the state(6) to be set
00:21:05.659 [2024-11-18 13:04:03.015251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32bc0 is same with the state(6) to be set
00:21:05.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:05.659 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:07.040 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2384209
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2384209
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638
-- # local arg=wait
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2384209
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:07.041 rmmod nvme_tcp
00:21:07.041 rmmod nvme_fabrics
00:21:07.041 rmmod nvme_keyring
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2384027 ']'
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2384027
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2384027 ']'
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2384027
00:21:07.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2384027) - No such process
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2384027 is not found'
00:21:07.041 Process with pid 2384027 is not found
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:07.041 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:08.950
00:21:08.950 real 0m9.764s
00:21:08.950 user 0m24.915s
00:21:08.950 sys 0m5.126s
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:08.950 ************************************
00:21:08.950 END TEST nvmf_shutdown_tc4
00:21:08.950 ************************************
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:21:08.950
00:21:08.950 real 0m40.492s
00:21:08.950 user 1m38.626s
00:21:08.950 sys 0m14.041s
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:08.950 ************************************
00:21:08.950 END TEST nvmf_shutdown
00:21:08.950 ************************************
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:08.950 ************************************
00:21:08.950 START TEST nvmf_nsid
00:21:08.950 ************************************
00:21:08.950 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:09.212 * Looking for test storage...
00:21:09.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:21:09.212 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:09.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.213 --rc genhtml_branch_coverage=1 00:21:09.213 --rc genhtml_function_coverage=1 00:21:09.213 --rc genhtml_legend=1 00:21:09.213 --rc geninfo_all_blocks=1 00:21:09.213 --rc 
geninfo_unexecuted_blocks=1 00:21:09.213 00:21:09.213 ' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:09.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.213 --rc genhtml_branch_coverage=1 00:21:09.213 --rc genhtml_function_coverage=1 00:21:09.213 --rc genhtml_legend=1 00:21:09.213 --rc geninfo_all_blocks=1 00:21:09.213 --rc geninfo_unexecuted_blocks=1 00:21:09.213 00:21:09.213 ' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:09.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.213 --rc genhtml_branch_coverage=1 00:21:09.213 --rc genhtml_function_coverage=1 00:21:09.213 --rc genhtml_legend=1 00:21:09.213 --rc geninfo_all_blocks=1 00:21:09.213 --rc geninfo_unexecuted_blocks=1 00:21:09.213 00:21:09.213 ' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:09.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.213 --rc genhtml_branch_coverage=1 00:21:09.213 --rc genhtml_function_coverage=1 00:21:09.213 --rc genhtml_legend=1 00:21:09.213 --rc geninfo_all_blocks=1 00:21:09.213 --rc geninfo_unexecuted_blocks=1 00:21:09.213 00:21:09.213 ' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.213 13:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:09.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:09.213 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:15.794 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.794 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:15.795 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:15.795 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:15.795 Found net devices under 0000:86:00.0: cvl_0_0 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:15.795 Found net devices under 0000:86:00.1: cvl_0_1 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:15.795 13:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:15.795 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:15.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:21:15.795 00:21:15.795 --- 10.0.0.2 ping statistics --- 00:21:15.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.795 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:21:15.795 00:21:15.795 --- 10.0.0.1 ping statistics --- 00:21:15.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.795 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:15.795 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:15.796 13:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2388673 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2388673 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2388673 ']' 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:15.796 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:15.796 [2024-11-18 13:04:12.785626] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:21:15.796 [2024-11-18 13:04:12.785672] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.796 [2024-11-18 13:04:12.866740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.796 [2024-11-18 13:04:12.908866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.796 [2024-11-18 13:04:12.908903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.796 [2024-11-18 13:04:12.908911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.796 [2024-11-18 13:04:12.908917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.796 [2024-11-18 13:04:12.908923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:15.796 [2024-11-18 13:04:12.909508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2388840 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.796 
13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3666095e-da43-4099-9919-5927be900193 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=987530ef-2bc0-49fe-99a3-4a80fae07cbc 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=12dbbe98-209f-4eb2-b0a5-f02d03b5666e 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:15.796 null0 00:21:15.796 null1 00:21:15.796 [2024-11-18 13:04:13.102702] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:21:15.796 [2024-11-18 13:04:13.102753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388840 ] 00:21:15.796 null2 00:21:15.796 [2024-11-18 13:04:13.109996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.796 [2024-11-18 13:04:13.134218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.796 [2024-11-18 13:04:13.161383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2388840 /var/tmp/tgt2.sock 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2388840 ']' 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:15.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:15.796 [2024-11-18 13:04:13.205500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:21:15.796 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:16.056 [2024-11-18 13:04:13.736988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.056 [2024-11-18 13:04:13.753108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:16.315 nvme0n1 nvme0n2 00:21:16.315 nvme1n1 00:21:16.315 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:16.315 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:16.315 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:21:17.255 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:21:18.193 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:18.193 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3666095e-da43-4099-9919-5927be900193 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:18.453 13:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3666095eda43409999195927be900193 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3666095EDA43409999195927BE900193 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3666095EDA43409999195927BE900193 == \3\6\6\6\0\9\5\E\D\A\4\3\4\0\9\9\9\9\1\9\5\9\2\7\B\E\9\0\0\1\9\3 ]] 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 987530ef-2bc0-49fe-99a3-4a80fae07cbc 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:18.453 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:18.453 
13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=987530ef2bc049fe99a34a80fae07cbc 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 987530EF2BC049FE99A34A80FAE07CBC 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 987530EF2BC049FE99A34A80FAE07CBC == \9\8\7\5\3\0\E\F\2\B\C\0\4\9\F\E\9\9\A\3\4\A\8\0\F\A\E\0\7\C\B\C ]] 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 12dbbe98-209f-4eb2-b0a5-f02d03b5666e 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:18.453 13:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=12dbbe98209f4eb2b0a5f02d03b5666e 00:21:18.453 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 12DBBE98209F4EB2B0A5F02D03B5666E 00:21:18.454 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 12DBBE98209F4EB2B0A5F02D03B5666E == \1\2\D\B\B\E\9\8\2\0\9\F\4\E\B\2\B\0\A\5\F\0\2\D\0\3\B\5\6\6\6\E ]] 00:21:18.454 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2388840 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2388840 ']' 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2388840 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2388840 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2388840' 00:21:18.714 killing process with pid 2388840 00:21:18.714 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2388840 00:21:18.714 13:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2388840 00:21:18.973 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:18.973 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:18.974 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:18.974 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.974 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:18.974 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.974 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.974 rmmod nvme_tcp 00:21:19.233 rmmod nvme_fabrics 00:21:19.233 rmmod nvme_keyring 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2388673 ']' 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2388673 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2388673 ']' 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2388673 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2388673 00:21:19.233 13:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2388673' 00:21:19.233 killing process with pid 2388673 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2388673 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2388673 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:19.233 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.234 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:19.234 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.234 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.234 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.770 13:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:21.770 00:21:21.770 real 0m12.391s 00:21:21.770 user 0m9.646s 00:21:21.770 sys 0m5.550s 00:21:21.770 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:21.770 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:21.770 ************************************ 00:21:21.770 END TEST nvmf_nsid 00:21:21.770 ************************************ 00:21:21.770 13:04:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:21.770 00:21:21.770 real 12m1.226s 00:21:21.770 user 25m42.126s 00:21:21.770 sys 3m44.248s 00:21:21.770 13:04:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:21.770 13:04:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:21.770 ************************************ 00:21:21.770 END TEST nvmf_target_extra 00:21:21.770 ************************************ 00:21:21.770 13:04:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:21.770 13:04:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:21.770 13:04:19 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:21.770 13:04:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.770 ************************************ 00:21:21.770 START TEST nvmf_host 00:21:21.770 ************************************ 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:21.770 * Looking for test storage... 
00:21:21.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.770 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:21.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.771 --rc genhtml_branch_coverage=1 00:21:21.771 --rc genhtml_function_coverage=1 00:21:21.771 --rc genhtml_legend=1 00:21:21.771 --rc geninfo_all_blocks=1 00:21:21.771 --rc geninfo_unexecuted_blocks=1 00:21:21.771 00:21:21.771 ' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:21.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.771 --rc genhtml_branch_coverage=1 00:21:21.771 --rc genhtml_function_coverage=1 00:21:21.771 --rc genhtml_legend=1 00:21:21.771 --rc 
geninfo_all_blocks=1 00:21:21.771 --rc geninfo_unexecuted_blocks=1 00:21:21.771 00:21:21.771 ' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:21.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.771 --rc genhtml_branch_coverage=1 00:21:21.771 --rc genhtml_function_coverage=1 00:21:21.771 --rc genhtml_legend=1 00:21:21.771 --rc geninfo_all_blocks=1 00:21:21.771 --rc geninfo_unexecuted_blocks=1 00:21:21.771 00:21:21.771 ' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:21.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.771 --rc genhtml_branch_coverage=1 00:21:21.771 --rc genhtml_function_coverage=1 00:21:21.771 --rc genhtml_legend=1 00:21:21.771 --rc geninfo_all_blocks=1 00:21:21.771 --rc geninfo_unexecuted_blocks=1 00:21:21.771 00:21:21.771 ' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.771 ************************************ 00:21:21.771 START TEST nvmf_multicontroller 00:21:21.771 ************************************ 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:21.771 * Looking for test storage... 
00:21:21.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:21:21.771 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.032 --rc genhtml_branch_coverage=1 00:21:22.032 --rc genhtml_function_coverage=1 
00:21:22.032 --rc genhtml_legend=1 00:21:22.032 --rc geninfo_all_blocks=1 00:21:22.032 --rc geninfo_unexecuted_blocks=1 00:21:22.032 00:21:22.032 ' 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.032 --rc genhtml_branch_coverage=1 00:21:22.032 --rc genhtml_function_coverage=1 00:21:22.032 --rc genhtml_legend=1 00:21:22.032 --rc geninfo_all_blocks=1 00:21:22.032 --rc geninfo_unexecuted_blocks=1 00:21:22.032 00:21:22.032 ' 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.032 --rc genhtml_branch_coverage=1 00:21:22.032 --rc genhtml_function_coverage=1 00:21:22.032 --rc genhtml_legend=1 00:21:22.032 --rc geninfo_all_blocks=1 00:21:22.032 --rc geninfo_unexecuted_blocks=1 00:21:22.032 00:21:22.032 ' 00:21:22.032 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.032 --rc genhtml_branch_coverage=1 00:21:22.032 --rc genhtml_function_coverage=1 00:21:22.032 --rc genhtml_legend=1 00:21:22.032 --rc geninfo_all_blocks=1 00:21:22.033 --rc geninfo_unexecuted_blocks=1 00:21:22.033 00:21:22.033 ' 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.033 13:04:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:22.033 13:04:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:28.614 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:28.614 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.614 13:04:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:28.614 Found net devices under 0000:86:00.0: cvl_0_0 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:28.614 Found net devices under 0000:86:00.1: cvl_0_1 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:28.614 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:21:28.615 00:21:28.615 --- 10.0.0.2 ping statistics --- 00:21:28.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.615 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:21:28.615 00:21:28.615 --- 10.0.0.1 ping statistics --- 00:21:28.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.615 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2393005 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2393005 00:21:28.615 13:04:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2393005 ']' 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 [2024-11-18 13:04:25.586989] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:21:28.615 [2024-11-18 13:04:25.587040] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.615 [2024-11-18 13:04:25.667306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:28.615 [2024-11-18 13:04:25.710377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.615 [2024-11-18 13:04:25.710414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:28.615 [2024-11-18 13:04:25.710421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.615 [2024-11-18 13:04:25.710427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.615 [2024-11-18 13:04:25.710433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.615 [2024-11-18 13:04:25.711896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.615 [2024-11-18 13:04:25.712008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.615 [2024-11-18 13:04:25.712008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 [2024-11-18 13:04:25.853075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 Malloc0 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 [2024-11-18 
13:04:25.917675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 [2024-11-18 13:04:25.929617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.615 Malloc1 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.615 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2393153 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2393153 /var/tmp/bdevperf.sock 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2393153 ']' 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:28.616 13:04:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.616 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.616 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:21:28.616 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:28.616 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.616 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.876 NVMe0n1 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.876 1 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:28.876 13:04:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.876 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.876 request: 00:21:28.876 { 00:21:28.876 "name": "NVMe0", 00:21:28.876 "trtype": "tcp", 00:21:28.876 "traddr": "10.0.0.2", 00:21:28.876 "adrfam": "ipv4", 00:21:28.876 "trsvcid": "4420", 00:21:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.876 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:28.876 "hostaddr": "10.0.0.1", 00:21:28.876 "prchk_reftag": false, 00:21:28.877 "prchk_guard": false, 00:21:28.877 "hdgst": false, 00:21:28.877 "ddgst": false, 00:21:28.877 "allow_unrecognized_csi": false, 00:21:28.877 "method": "bdev_nvme_attach_controller", 00:21:28.877 "req_id": 1 00:21:28.877 } 00:21:28.877 Got JSON-RPC error response 00:21:28.877 response: 00:21:28.877 { 00:21:28.877 "code": -114, 00:21:28.877 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:28.877 } 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:28.877 13:04:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.877 request: 00:21:28.877 { 00:21:28.877 "name": "NVMe0", 00:21:28.877 "trtype": "tcp", 00:21:28.877 "traddr": "10.0.0.2", 00:21:28.877 "adrfam": "ipv4", 00:21:28.877 "trsvcid": "4420", 00:21:28.877 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:28.877 "hostaddr": "10.0.0.1", 00:21:28.877 "prchk_reftag": false, 00:21:28.877 "prchk_guard": false, 00:21:28.877 "hdgst": false, 00:21:28.877 "ddgst": false, 00:21:28.877 "allow_unrecognized_csi": false, 00:21:28.877 "method": "bdev_nvme_attach_controller", 00:21:28.877 "req_id": 1 00:21:28.877 } 00:21:28.877 Got JSON-RPC error response 00:21:28.877 response: 00:21:28.877 { 00:21:28.877 "code": -114, 00:21:28.877 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:28.877 } 00:21:28.877 13:04:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.877 request: 00:21:28.877 { 00:21:28.877 "name": "NVMe0", 00:21:28.877 "trtype": "tcp", 00:21:28.877 "traddr": "10.0.0.2", 00:21:28.877 "adrfam": "ipv4", 00:21:28.877 "trsvcid": "4420", 00:21:28.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.877 "hostaddr": "10.0.0.1", 00:21:28.877 "prchk_reftag": false, 00:21:28.877 "prchk_guard": false, 00:21:28.877 "hdgst": false, 00:21:28.877 "ddgst": false, 00:21:28.877 "multipath": "disable", 00:21:28.877 "allow_unrecognized_csi": false, 00:21:28.877 "method": "bdev_nvme_attach_controller", 00:21:28.877 "req_id": 1 00:21:28.877 } 00:21:28.877 Got JSON-RPC error response 00:21:28.877 response: 00:21:28.877 { 00:21:28.877 "code": -114, 00:21:28.877 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:28.877 } 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:28.877 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.878 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.878 request: 00:21:28.878 { 00:21:28.878 "name": "NVMe0", 00:21:28.878 "trtype": "tcp", 00:21:28.878 "traddr": "10.0.0.2", 00:21:28.878 "adrfam": "ipv4", 00:21:28.878 "trsvcid": "4420", 00:21:28.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.878 "hostaddr": "10.0.0.1", 00:21:28.878 "prchk_reftag": false, 00:21:28.878 "prchk_guard": false, 00:21:28.878 "hdgst": false, 00:21:28.878 "ddgst": false, 00:21:28.878 "multipath": "failover", 00:21:28.878 "allow_unrecognized_csi": false, 00:21:28.878 "method": "bdev_nvme_attach_controller", 00:21:28.878 "req_id": 1 00:21:28.878 } 00:21:28.878 Got JSON-RPC error response 00:21:28.878 response: 00:21:28.878 { 00:21:28.878 "code": -114, 00:21:28.878 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:28.878 } 00:21:28.878 13:04:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:28.878 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:28.878 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:28.878 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:28.878 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:28.878 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.878 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.878 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.137 NVMe0n1 00:21:29.137 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.137 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:29.137 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.137 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.137 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.137 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:29.137 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.137 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.396 00:21:29.396 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.396 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:29.396 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:29.396 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.396 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.396 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.396 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:29.396 13:04:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.332 { 00:21:30.332 "results": [ 00:21:30.332 { 00:21:30.332 "job": "NVMe0n1", 00:21:30.332 "core_mask": "0x1", 00:21:30.332 "workload": "write", 00:21:30.332 "status": "finished", 00:21:30.332 "queue_depth": 128, 00:21:30.332 "io_size": 4096, 00:21:30.332 "runtime": 1.003869, 00:21:30.332 "iops": 23970.259067667197, 00:21:30.332 "mibps": 93.63382448307499, 00:21:30.332 "io_failed": 0, 00:21:30.332 "io_timeout": 0, 00:21:30.332 "avg_latency_us": 5330.0153878677165, 00:21:30.332 "min_latency_us": 2065.808695652174, 00:21:30.332 "max_latency_us": 15386.713043478261 00:21:30.332 } 00:21:30.332 ], 00:21:30.332 "core_count": 1 00:21:30.333 } 00:21:30.333 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:30.333 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.333 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2393153 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 2393153 ']' 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2393153 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2393153 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2393153' 00:21:30.592 killing process with pid 2393153 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2393153 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2393153 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.592 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:21:30.593 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:30.593 [2024-11-18 13:04:26.035245] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:21:30.593 [2024-11-18 13:04:26.035294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393153 ] 00:21:30.593 [2024-11-18 13:04:26.112301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.593 [2024-11-18 13:04:26.155234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.593 [2024-11-18 13:04:26.887509] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 4c7f0641-7063-4ed5-8b65-81c6687efe8c already exists 00:21:30.593 [2024-11-18 13:04:26.887538] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:4c7f0641-7063-4ed5-8b65-81c6687efe8c alias for bdev NVMe1n1 00:21:30.593 [2024-11-18 13:04:26.887547] bdev_nvme.c:4656:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:30.593 Running I/O for 1 seconds... 00:21:30.593 23901.00 IOPS, 93.36 MiB/s 00:21:30.593 Latency(us) 00:21:30.593 [2024-11-18T12:04:28.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.593 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:30.593 NVMe0n1 : 1.00 23970.26 93.63 0.00 0.00 5330.02 2065.81 15386.71 00:21:30.593 [2024-11-18T12:04:28.295Z] =================================================================================================================== 00:21:30.593 [2024-11-18T12:04:28.295Z] Total : 23970.26 93.63 0.00 0.00 5330.02 2065.81 15386.71 00:21:30.593 Received shutdown signal, test time was about 1.000000 seconds 00:21:30.593 00:21:30.593 Latency(us) 00:21:30.593 [2024-11-18T12:04:28.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.593 [2024-11-18T12:04:28.295Z] =================================================================================================================== 00:21:30.593 [2024-11-18T12:04:28.295Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:30.593 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.593 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:30.852 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.852 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:30.852 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.852 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.852 rmmod nvme_tcp 00:21:30.852 rmmod nvme_fabrics 00:21:30.852 rmmod nvme_keyring 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2393005 ']' 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2393005 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 2393005 ']' 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2393005 
00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2393005 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2393005' 00:21:30.853 killing process with pid 2393005 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2393005 00:21:30.853 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2393005 00:21:31.112 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.112 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.112 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.112 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:31.112 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.112 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:31.112 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.112 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.113 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:31.113 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.113 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.113 13:04:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.019 13:04:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.019 00:21:33.019 real 0m11.344s 00:21:33.019 user 0m12.772s 00:21:33.019 sys 0m5.254s 00:21:33.019 13:04:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:33.019 13:04:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:33.019 ************************************ 00:21:33.019 END TEST nvmf_multicontroller 00:21:33.019 ************************************ 00:21:33.019 13:04:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:33.019 13:04:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:33.019 13:04:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:33.019 13:04:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.279 ************************************ 00:21:33.279 START TEST nvmf_aer 00:21:33.279 ************************************ 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:33.279 * Looking for test storage... 
00:21:33.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:33.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.279 --rc genhtml_branch_coverage=1 00:21:33.279 --rc genhtml_function_coverage=1 00:21:33.279 --rc genhtml_legend=1 00:21:33.279 --rc geninfo_all_blocks=1 00:21:33.279 --rc geninfo_unexecuted_blocks=1 00:21:33.279 00:21:33.279 ' 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:33.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.279 --rc 
genhtml_branch_coverage=1 00:21:33.279 --rc genhtml_function_coverage=1 00:21:33.279 --rc genhtml_legend=1 00:21:33.279 --rc geninfo_all_blocks=1 00:21:33.279 --rc geninfo_unexecuted_blocks=1 00:21:33.279 00:21:33.279 ' 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:33.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.279 --rc genhtml_branch_coverage=1 00:21:33.279 --rc genhtml_function_coverage=1 00:21:33.279 --rc genhtml_legend=1 00:21:33.279 --rc geninfo_all_blocks=1 00:21:33.279 --rc geninfo_unexecuted_blocks=1 00:21:33.279 00:21:33.279 ' 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:33.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.279 --rc genhtml_branch_coverage=1 00:21:33.279 --rc genhtml_function_coverage=1 00:21:33.279 --rc genhtml_legend=1 00:21:33.279 --rc geninfo_all_blocks=1 00:21:33.279 --rc geninfo_unexecuted_blocks=1 00:21:33.279 00:21:33.279 ' 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.279 13:04:30 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.279 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.280 13:04:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:39.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:39.854 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.854 13:04:36 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:39.854 Found net devices under 0000:86:00.0: cvl_0_0 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:39.854 Found net devices under 0000:86:00.1: cvl_0_1 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:39.854 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:39.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:39.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:21:39.854 00:21:39.854 --- 10.0.0.2 ping statistics --- 00:21:39.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.855 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:21:39.855 00:21:39.855 --- 10.0.0.1 ping statistics --- 00:21:39.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.855 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2397022 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2397022 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 2397022 ']' 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:39.855 13:04:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.855 [2024-11-18 13:04:36.959820] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:21:39.855 [2024-11-18 13:04:36.959865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.855 [2024-11-18 13:04:37.039637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.855 [2024-11-18 13:04:37.082786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:39.855 [2024-11-18 13:04:37.082824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.855 [2024-11-18 13:04:37.082832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.855 [2024-11-18 13:04:37.082838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.855 [2024-11-18 13:04:37.082844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.855 [2024-11-18 13:04:37.084422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.855 [2024-11-18 13:04:37.084532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.855 [2024-11-18 13:04:37.084637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.855 [2024-11-18 13:04:37.084638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.855 [2024-11-18 13:04:37.226176] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.855 Malloc0 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.855 [2024-11-18 13:04:37.294675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.855 [ 00:21:39.855 { 00:21:39.855 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:39.855 "subtype": "Discovery", 00:21:39.855 "listen_addresses": [], 00:21:39.855 "allow_any_host": true, 00:21:39.855 "hosts": [] 00:21:39.855 }, 00:21:39.855 { 00:21:39.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.855 "subtype": "NVMe", 00:21:39.855 "listen_addresses": [ 00:21:39.855 { 00:21:39.855 "trtype": "TCP", 00:21:39.855 "adrfam": "IPv4", 00:21:39.855 "traddr": "10.0.0.2", 00:21:39.855 "trsvcid": "4420" 00:21:39.855 } 00:21:39.855 ], 00:21:39.855 "allow_any_host": true, 00:21:39.855 "hosts": [], 00:21:39.855 "serial_number": "SPDK00000000000001", 00:21:39.855 "model_number": "SPDK bdev Controller", 00:21:39.855 "max_namespaces": 2, 00:21:39.855 "min_cntlid": 1, 00:21:39.855 "max_cntlid": 65519, 00:21:39.855 "namespaces": [ 00:21:39.855 { 00:21:39.855 "nsid": 1, 00:21:39.855 "bdev_name": "Malloc0", 00:21:39.855 "name": "Malloc0", 00:21:39.855 "nguid": "BE211D3666484B66AB1ED64A4C0E0016", 00:21:39.855 "uuid": "be211d36-6648-4b66-ab1e-d64a4c0e0016" 00:21:39.855 } 00:21:39.855 ] 00:21:39.855 } 00:21:39.855 ] 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2397062 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.855 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.115 Malloc1 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.115 Asynchronous Event Request test 00:21:40.115 Attaching to 10.0.0.2 00:21:40.115 Attached to 10.0.0.2 00:21:40.115 Registering asynchronous event callbacks... 00:21:40.115 Starting namespace attribute notice tests for all controllers... 00:21:40.115 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:40.115 aer_cb - Changed Namespace 00:21:40.115 Cleaning up... 
00:21:40.115 [ 00:21:40.115 { 00:21:40.115 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:40.115 "subtype": "Discovery", 00:21:40.115 "listen_addresses": [], 00:21:40.115 "allow_any_host": true, 00:21:40.115 "hosts": [] 00:21:40.115 }, 00:21:40.115 { 00:21:40.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.115 "subtype": "NVMe", 00:21:40.115 "listen_addresses": [ 00:21:40.115 { 00:21:40.115 "trtype": "TCP", 00:21:40.115 "adrfam": "IPv4", 00:21:40.115 "traddr": "10.0.0.2", 00:21:40.115 "trsvcid": "4420" 00:21:40.115 } 00:21:40.115 ], 00:21:40.115 "allow_any_host": true, 00:21:40.115 "hosts": [], 00:21:40.115 "serial_number": "SPDK00000000000001", 00:21:40.115 "model_number": "SPDK bdev Controller", 00:21:40.115 "max_namespaces": 2, 00:21:40.115 "min_cntlid": 1, 00:21:40.115 "max_cntlid": 65519, 00:21:40.115 "namespaces": [ 00:21:40.115 { 00:21:40.115 "nsid": 1, 00:21:40.115 "bdev_name": "Malloc0", 00:21:40.115 "name": "Malloc0", 00:21:40.115 "nguid": "BE211D3666484B66AB1ED64A4C0E0016", 00:21:40.115 "uuid": "be211d36-6648-4b66-ab1e-d64a4c0e0016" 00:21:40.115 }, 00:21:40.115 { 00:21:40.115 "nsid": 2, 00:21:40.115 "bdev_name": "Malloc1", 00:21:40.115 "name": "Malloc1", 00:21:40.115 "nguid": "FC20F34DC1E8468F90DA99E6C3886ECE", 00:21:40.115 "uuid": "fc20f34d-c1e8-468f-90da-99e6c3886ece" 00:21:40.115 } 00:21:40.115 ] 00:21:40.115 } 00:21:40.115 ] 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2397062 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.115 13:04:37 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:40.115 rmmod nvme_tcp 00:21:40.115 rmmod nvme_fabrics 00:21:40.115 rmmod nvme_keyring 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2397022 ']' 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2397022 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 2397022 ']' 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 2397022 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2397022 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2397022' 00:21:40.115 killing process with pid 2397022 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 2397022 00:21:40.115 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 2397022 00:21:40.374 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:40.374 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:40.374 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:40.374 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:40.374 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:40.374 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:40.374 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:40.375 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.375 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:40.375 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.375 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.375 13:04:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:42.911 00:21:42.911 real 0m9.270s 00:21:42.911 user 0m5.218s 00:21:42.911 sys 0m4.830s 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.911 ************************************ 00:21:42.911 END TEST nvmf_aer 00:21:42.911 ************************************ 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.911 ************************************ 00:21:42.911 START TEST nvmf_async_init 00:21:42.911 ************************************ 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:42.911 * Looking for test storage... 
00:21:42.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:42.911 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.911 13:04:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:42.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.912 --rc genhtml_branch_coverage=1 00:21:42.912 --rc genhtml_function_coverage=1 00:21:42.912 --rc genhtml_legend=1 00:21:42.912 --rc geninfo_all_blocks=1 00:21:42.912 --rc geninfo_unexecuted_blocks=1 00:21:42.912 
00:21:42.912 ' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:42.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.912 --rc genhtml_branch_coverage=1 00:21:42.912 --rc genhtml_function_coverage=1 00:21:42.912 --rc genhtml_legend=1 00:21:42.912 --rc geninfo_all_blocks=1 00:21:42.912 --rc geninfo_unexecuted_blocks=1 00:21:42.912 00:21:42.912 ' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:42.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.912 --rc genhtml_branch_coverage=1 00:21:42.912 --rc genhtml_function_coverage=1 00:21:42.912 --rc genhtml_legend=1 00:21:42.912 --rc geninfo_all_blocks=1 00:21:42.912 --rc geninfo_unexecuted_blocks=1 00:21:42.912 00:21:42.912 ' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:42.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.912 --rc genhtml_branch_coverage=1 00:21:42.912 --rc genhtml_function_coverage=1 00:21:42.912 --rc genhtml_legend=1 00:21:42.912 --rc geninfo_all_blocks=1 00:21:42.912 --rc geninfo_unexecuted_blocks=1 00:21:42.912 00:21:42.912 ' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ee11b87bd78a47c2895250035f638450 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.912 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.913 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.913 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.913 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:42.913 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:42.913 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:42.913 13:04:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.483 13:04:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:49.483 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.483 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:49.483 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:49.484 Found net devices under 0000:86:00.0: cvl_0_0 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:49.484 Found net devices under 0000:86:00.1: cvl_0_1 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.484 13:04:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:49.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:21:49.484 00:21:49.484 --- 10.0.0.2 ping statistics --- 00:21:49.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.484 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:49.484 00:21:49.484 --- 10.0.0.1 ping statistics --- 00:21:49.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.484 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2400736 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2400736 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 2400736 ']' 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.484 [2024-11-18 13:04:46.302596] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:21:49.484 [2024-11-18 13:04:46.302645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.484 [2024-11-18 13:04:46.382168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.484 [2024-11-18 13:04:46.423848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.484 [2024-11-18 13:04:46.423885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.484 [2024-11-18 13:04:46.423892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.484 [2024-11-18 13:04:46.423898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.484 [2024-11-18 13:04:46.423903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.484 [2024-11-18 13:04:46.424464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.484 [2024-11-18 13:04:46.554915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.484 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.485 null0 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ee11b87bd78a47c2895250035f638450 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.485 [2024-11-18 13:04:46.603162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.485 nvme0n1 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.485 [ 00:21:49.485 { 00:21:49.485 "name": "nvme0n1", 00:21:49.485 "aliases": [ 00:21:49.485 "ee11b87b-d78a-47c2-8952-50035f638450" 00:21:49.485 ], 00:21:49.485 "product_name": "NVMe disk", 00:21:49.485 "block_size": 512, 00:21:49.485 "num_blocks": 2097152, 00:21:49.485 "uuid": "ee11b87b-d78a-47c2-8952-50035f638450", 00:21:49.485 "numa_id": 1, 00:21:49.485 "assigned_rate_limits": { 00:21:49.485 "rw_ios_per_sec": 0, 00:21:49.485 "rw_mbytes_per_sec": 0, 00:21:49.485 "r_mbytes_per_sec": 0, 00:21:49.485 "w_mbytes_per_sec": 0 00:21:49.485 }, 00:21:49.485 "claimed": false, 00:21:49.485 "zoned": false, 00:21:49.485 "supported_io_types": { 00:21:49.485 "read": true, 00:21:49.485 "write": true, 00:21:49.485 "unmap": false, 00:21:49.485 "flush": true, 00:21:49.485 "reset": true, 00:21:49.485 "nvme_admin": true, 00:21:49.485 "nvme_io": true, 00:21:49.485 "nvme_io_md": false, 00:21:49.485 "write_zeroes": true, 00:21:49.485 "zcopy": false, 00:21:49.485 "get_zone_info": false, 00:21:49.485 "zone_management": false, 00:21:49.485 "zone_append": false, 00:21:49.485 "compare": true, 00:21:49.485 "compare_and_write": true, 00:21:49.485 "abort": true, 00:21:49.485 "seek_hole": false, 00:21:49.485 "seek_data": false, 00:21:49.485 "copy": true, 00:21:49.485 
"nvme_iov_md": false 00:21:49.485 }, 00:21:49.485 "memory_domains": [ 00:21:49.485 { 00:21:49.485 "dma_device_id": "system", 00:21:49.485 "dma_device_type": 1 00:21:49.485 } 00:21:49.485 ], 00:21:49.485 "driver_specific": { 00:21:49.485 "nvme": [ 00:21:49.485 { 00:21:49.485 "trid": { 00:21:49.485 "trtype": "TCP", 00:21:49.485 "adrfam": "IPv4", 00:21:49.485 "traddr": "10.0.0.2", 00:21:49.485 "trsvcid": "4420", 00:21:49.485 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:49.485 }, 00:21:49.485 "ctrlr_data": { 00:21:49.485 "cntlid": 1, 00:21:49.485 "vendor_id": "0x8086", 00:21:49.485 "model_number": "SPDK bdev Controller", 00:21:49.485 "serial_number": "00000000000000000000", 00:21:49.485 "firmware_revision": "25.01", 00:21:49.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.485 "oacs": { 00:21:49.485 "security": 0, 00:21:49.485 "format": 0, 00:21:49.485 "firmware": 0, 00:21:49.485 "ns_manage": 0 00:21:49.485 }, 00:21:49.485 "multi_ctrlr": true, 00:21:49.485 "ana_reporting": false 00:21:49.485 }, 00:21:49.485 "vs": { 00:21:49.485 "nvme_version": "1.3" 00:21:49.485 }, 00:21:49.485 "ns_data": { 00:21:49.485 "id": 1, 00:21:49.485 "can_share": true 00:21:49.485 } 00:21:49.485 } 00:21:49.485 ], 00:21:49.485 "mp_policy": "active_passive" 00:21:49.485 } 00:21:49.485 } 00:21:49.485 ] 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.485 [2024-11-18 13:04:46.863692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:49.485 [2024-11-18 13:04:46.863744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1eb30a0 (9): Bad file descriptor 00:21:49.485 [2024-11-18 13:04:46.995427] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.485 [ 00:21:49.485 { 00:21:49.485 "name": "nvme0n1", 00:21:49.485 "aliases": [ 00:21:49.485 "ee11b87b-d78a-47c2-8952-50035f638450" 00:21:49.485 ], 00:21:49.485 "product_name": "NVMe disk", 00:21:49.485 "block_size": 512, 00:21:49.485 "num_blocks": 2097152, 00:21:49.485 "uuid": "ee11b87b-d78a-47c2-8952-50035f638450", 00:21:49.485 "numa_id": 1, 00:21:49.485 "assigned_rate_limits": { 00:21:49.485 "rw_ios_per_sec": 0, 00:21:49.485 "rw_mbytes_per_sec": 0, 00:21:49.485 "r_mbytes_per_sec": 0, 00:21:49.485 "w_mbytes_per_sec": 0 00:21:49.485 }, 00:21:49.485 "claimed": false, 00:21:49.485 "zoned": false, 00:21:49.485 "supported_io_types": { 00:21:49.485 "read": true, 00:21:49.485 "write": true, 00:21:49.485 "unmap": false, 00:21:49.485 "flush": true, 00:21:49.485 "reset": true, 00:21:49.485 "nvme_admin": true, 00:21:49.485 "nvme_io": true, 00:21:49.485 "nvme_io_md": false, 00:21:49.485 "write_zeroes": true, 00:21:49.485 "zcopy": false, 00:21:49.485 "get_zone_info": false, 00:21:49.485 "zone_management": false, 00:21:49.485 "zone_append": false, 00:21:49.485 "compare": true, 00:21:49.485 "compare_and_write": true, 00:21:49.485 "abort": true, 00:21:49.485 "seek_hole": false, 00:21:49.485 "seek_data": false, 00:21:49.485 "copy": true, 00:21:49.485 "nvme_iov_md": false 00:21:49.485 }, 00:21:49.485 "memory_domains": [ 
00:21:49.485 { 00:21:49.485 "dma_device_id": "system", 00:21:49.485 "dma_device_type": 1 00:21:49.485 } 00:21:49.485 ], 00:21:49.485 "driver_specific": { 00:21:49.485 "nvme": [ 00:21:49.485 { 00:21:49.485 "trid": { 00:21:49.485 "trtype": "TCP", 00:21:49.485 "adrfam": "IPv4", 00:21:49.485 "traddr": "10.0.0.2", 00:21:49.485 "trsvcid": "4420", 00:21:49.485 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:49.485 }, 00:21:49.485 "ctrlr_data": { 00:21:49.485 "cntlid": 2, 00:21:49.485 "vendor_id": "0x8086", 00:21:49.485 "model_number": "SPDK bdev Controller", 00:21:49.485 "serial_number": "00000000000000000000", 00:21:49.485 "firmware_revision": "25.01", 00:21:49.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.485 "oacs": { 00:21:49.485 "security": 0, 00:21:49.485 "format": 0, 00:21:49.485 "firmware": 0, 00:21:49.485 "ns_manage": 0 00:21:49.485 }, 00:21:49.485 "multi_ctrlr": true, 00:21:49.485 "ana_reporting": false 00:21:49.485 }, 00:21:49.485 "vs": { 00:21:49.485 "nvme_version": "1.3" 00:21:49.485 }, 00:21:49.485 "ns_data": { 00:21:49.485 "id": 1, 00:21:49.485 "can_share": true 00:21:49.485 } 00:21:49.485 } 00:21:49.485 ], 00:21:49.485 "mp_policy": "active_passive" 00:21:49.485 } 00:21:49.485 } 00:21:49.485 ] 00:21:49.485 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.485 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.485 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.485 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.PBw4ZpPMvo 
00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.PBw4ZpPMvo 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.PBw4ZpPMvo 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.486 [2024-11-18 13:04:47.068308] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.486 [2024-11-18 13:04:47.068405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.486 [2024-11-18 13:04:47.084371] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.486 nvme0n1 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.486 [ 00:21:49.486 { 00:21:49.486 "name": "nvme0n1", 00:21:49.486 "aliases": [ 00:21:49.486 "ee11b87b-d78a-47c2-8952-50035f638450" 00:21:49.486 ], 00:21:49.486 "product_name": "NVMe disk", 00:21:49.486 "block_size": 512, 00:21:49.486 "num_blocks": 2097152, 00:21:49.486 "uuid": "ee11b87b-d78a-47c2-8952-50035f638450", 00:21:49.486 "numa_id": 1, 00:21:49.486 "assigned_rate_limits": { 00:21:49.486 "rw_ios_per_sec": 0, 00:21:49.486 
"rw_mbytes_per_sec": 0, 00:21:49.486 "r_mbytes_per_sec": 0, 00:21:49.486 "w_mbytes_per_sec": 0 00:21:49.486 }, 00:21:49.486 "claimed": false, 00:21:49.486 "zoned": false, 00:21:49.486 "supported_io_types": { 00:21:49.486 "read": true, 00:21:49.486 "write": true, 00:21:49.486 "unmap": false, 00:21:49.486 "flush": true, 00:21:49.486 "reset": true, 00:21:49.486 "nvme_admin": true, 00:21:49.486 "nvme_io": true, 00:21:49.486 "nvme_io_md": false, 00:21:49.486 "write_zeroes": true, 00:21:49.486 "zcopy": false, 00:21:49.486 "get_zone_info": false, 00:21:49.486 "zone_management": false, 00:21:49.486 "zone_append": false, 00:21:49.486 "compare": true, 00:21:49.486 "compare_and_write": true, 00:21:49.486 "abort": true, 00:21:49.486 "seek_hole": false, 00:21:49.486 "seek_data": false, 00:21:49.486 "copy": true, 00:21:49.486 "nvme_iov_md": false 00:21:49.486 }, 00:21:49.486 "memory_domains": [ 00:21:49.486 { 00:21:49.486 "dma_device_id": "system", 00:21:49.486 "dma_device_type": 1 00:21:49.486 } 00:21:49.486 ], 00:21:49.486 "driver_specific": { 00:21:49.486 "nvme": [ 00:21:49.486 { 00:21:49.486 "trid": { 00:21:49.486 "trtype": "TCP", 00:21:49.486 "adrfam": "IPv4", 00:21:49.486 "traddr": "10.0.0.2", 00:21:49.486 "trsvcid": "4421", 00:21:49.486 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:49.486 }, 00:21:49.486 "ctrlr_data": { 00:21:49.486 "cntlid": 3, 00:21:49.486 "vendor_id": "0x8086", 00:21:49.486 "model_number": "SPDK bdev Controller", 00:21:49.486 "serial_number": "00000000000000000000", 00:21:49.486 "firmware_revision": "25.01", 00:21:49.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.486 "oacs": { 00:21:49.486 "security": 0, 00:21:49.486 "format": 0, 00:21:49.486 "firmware": 0, 00:21:49.486 "ns_manage": 0 00:21:49.486 }, 00:21:49.486 "multi_ctrlr": true, 00:21:49.486 "ana_reporting": false 00:21:49.486 }, 00:21:49.486 "vs": { 00:21:49.486 "nvme_version": "1.3" 00:21:49.486 }, 00:21:49.486 "ns_data": { 00:21:49.486 "id": 1, 00:21:49.486 "can_share": true 00:21:49.486 } 
00:21:49.486 } 00:21:49.486 ], 00:21:49.486 "mp_policy": "active_passive" 00:21:49.486 } 00:21:49.486 } 00:21:49.486 ] 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.486 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.PBw4ZpPMvo 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:49.746 rmmod nvme_tcp 00:21:49.746 rmmod nvme_fabrics 00:21:49.746 rmmod nvme_keyring 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:49.746 13:04:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2400736 ']' 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2400736 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 2400736 ']' 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 2400736 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2400736 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2400736' 00:21:49.746 killing process with pid 2400736 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 2400736 00:21:49.746 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 2400736 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.006 
13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.006 13:04:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.914 13:04:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:51.914 00:21:51.914 real 0m9.449s 00:21:51.914 user 0m3.041s 00:21:51.914 sys 0m4.835s 00:21:51.914 13:04:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:51.914 13:04:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.914 ************************************ 00:21:51.914 END TEST nvmf_async_init 00:21:51.914 ************************************ 00:21:51.914 13:04:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:51.914 13:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:51.914 13:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:51.914 13:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.174 ************************************ 00:21:52.174 START TEST dma 00:21:52.174 ************************************ 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:52.174 * Looking for test storage... 00:21:52.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:52.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.174 --rc genhtml_branch_coverage=1 00:21:52.174 --rc genhtml_function_coverage=1 00:21:52.174 --rc genhtml_legend=1 00:21:52.174 --rc geninfo_all_blocks=1 00:21:52.174 --rc geninfo_unexecuted_blocks=1 00:21:52.174 00:21:52.174 ' 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:52.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.174 --rc genhtml_branch_coverage=1 00:21:52.174 --rc genhtml_function_coverage=1 
00:21:52.174 --rc genhtml_legend=1 00:21:52.174 --rc geninfo_all_blocks=1 00:21:52.174 --rc geninfo_unexecuted_blocks=1 00:21:52.174 00:21:52.174 ' 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:52.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.174 --rc genhtml_branch_coverage=1 00:21:52.174 --rc genhtml_function_coverage=1 00:21:52.174 --rc genhtml_legend=1 00:21:52.174 --rc geninfo_all_blocks=1 00:21:52.174 --rc geninfo_unexecuted_blocks=1 00:21:52.174 00:21:52.174 ' 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:52.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.174 --rc genhtml_branch_coverage=1 00:21:52.174 --rc genhtml_function_coverage=1 00:21:52.174 --rc genhtml_legend=1 00:21:52.174 --rc geninfo_all_blocks=1 00:21:52.174 --rc geninfo_unexecuted_blocks=1 00:21:52.174 00:21:52.174 ' 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:52.174 
13:04:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.174 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:52.175 00:21:52.175 real 0m0.200s 00:21:52.175 user 0m0.124s 00:21:52.175 sys 0m0.092s 00:21:52.175 13:04:49 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:52.175 ************************************ 00:21:52.175 END TEST dma 00:21:52.175 ************************************ 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:52.175 13:04:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.434 ************************************ 00:21:52.434 START TEST nvmf_identify 00:21:52.434 ************************************ 00:21:52.434 13:04:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:52.434 * Looking for test storage... 
00:21:52.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:52.434 13:04:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:52.434 13:04:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:21:52.434 13:04:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:52.434 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.435 --rc genhtml_branch_coverage=1 00:21:52.435 --rc genhtml_function_coverage=1 00:21:52.435 --rc genhtml_legend=1 00:21:52.435 --rc geninfo_all_blocks=1 00:21:52.435 --rc geninfo_unexecuted_blocks=1 00:21:52.435 00:21:52.435 ' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:21:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.435 --rc genhtml_branch_coverage=1 00:21:52.435 --rc genhtml_function_coverage=1 00:21:52.435 --rc genhtml_legend=1 00:21:52.435 --rc geninfo_all_blocks=1 00:21:52.435 --rc geninfo_unexecuted_blocks=1 00:21:52.435 00:21:52.435 ' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.435 --rc genhtml_branch_coverage=1 00:21:52.435 --rc genhtml_function_coverage=1 00:21:52.435 --rc genhtml_legend=1 00:21:52.435 --rc geninfo_all_blocks=1 00:21:52.435 --rc geninfo_unexecuted_blocks=1 00:21:52.435 00:21:52.435 ' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.435 --rc genhtml_branch_coverage=1 00:21:52.435 --rc genhtml_function_coverage=1 00:21:52.435 --rc genhtml_legend=1 00:21:52.435 --rc geninfo_all_blocks=1 00:21:52.435 --rc geninfo_unexecuted_blocks=1 00:21:52.435 00:21:52.435 ' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.435 13:04:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:59.010 13:04:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:59.010 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.010 
13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:59.010 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:59.010 Found net devices under 0000:86:00.0: cvl_0_0 00:21:59.010 13:04:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:59.010 Found net devices under 0000:86:00.1: cvl_0_1 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:59.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:21:59.010 00:21:59.010 --- 10.0.0.2 ping statistics --- 00:21:59.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.010 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:21:59.010 00:21:59.010 --- 10.0.0.1 ping statistics --- 00:21:59.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.010 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2404394 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2404394 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 2404394 ']' 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:59.010 13:04:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.010 [2024-11-18 13:04:55.878561] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:21:59.010 [2024-11-18 13:04:55.878604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.010 [2024-11-18 13:04:55.958865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.010 [2024-11-18 13:04:56.004043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.010 [2024-11-18 13:04:56.004075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.010 [2024-11-18 13:04:56.004082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.010 [2024-11-18 13:04:56.004088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.010 [2024-11-18 13:04:56.004093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:59.010 [2024-11-18 13:04:56.005674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.010 [2024-11-18 13:04:56.005803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.010 [2024-11-18 13:04:56.005907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.010 [2024-11-18 13:04:56.005908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.271 [2024-11-18 13:04:56.724251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.271 Malloc0 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.271 13:04:56 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.271 [2024-11-18 13:04:56.822821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.271 13:04:56 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.271 [ 00:21:59.271 { 00:21:59.271 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:59.271 "subtype": "Discovery", 00:21:59.271 "listen_addresses": [ 00:21:59.271 { 00:21:59.271 "trtype": "TCP", 00:21:59.271 "adrfam": "IPv4", 00:21:59.271 "traddr": "10.0.0.2", 00:21:59.271 "trsvcid": "4420" 00:21:59.271 } 00:21:59.271 ], 00:21:59.271 "allow_any_host": true, 00:21:59.271 "hosts": [] 00:21:59.271 }, 00:21:59.271 { 00:21:59.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.271 "subtype": "NVMe", 00:21:59.271 "listen_addresses": [ 00:21:59.271 { 00:21:59.271 "trtype": "TCP", 00:21:59.271 "adrfam": "IPv4", 00:21:59.271 "traddr": "10.0.0.2", 00:21:59.271 "trsvcid": "4420" 00:21:59.271 } 00:21:59.271 ], 00:21:59.271 "allow_any_host": true, 00:21:59.271 "hosts": [], 00:21:59.271 "serial_number": "SPDK00000000000001", 00:21:59.271 "model_number": "SPDK bdev Controller", 00:21:59.271 "max_namespaces": 32, 00:21:59.271 "min_cntlid": 1, 00:21:59.271 "max_cntlid": 65519, 00:21:59.271 "namespaces": [ 00:21:59.271 { 00:21:59.271 "nsid": 1, 00:21:59.271 "bdev_name": "Malloc0", 00:21:59.271 "name": "Malloc0", 00:21:59.271 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:59.271 "eui64": "ABCDEF0123456789", 00:21:59.271 "uuid": "15b6f0a2-c0bc-4a27-b71f-d53bfdd905ab" 00:21:59.271 } 00:21:59.271 ] 00:21:59.271 } 00:21:59.271 ] 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.271 13:04:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:59.271 [2024-11-18 13:04:56.864757] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:21:59.271 [2024-11-18 13:04:56.864781] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404641 ] 00:21:59.271 [2024-11-18 13:04:56.904868] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:59.271 [2024-11-18 13:04:56.904918] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:59.271 [2024-11-18 13:04:56.904923] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:59.271 [2024-11-18 13:04:56.904935] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:59.271 [2024-11-18 13:04:56.904943] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:59.271 [2024-11-18 13:04:56.908642] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:59.271 [2024-11-18 13:04:56.908681] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1551690 0 00:21:59.271 [2024-11-18 13:04:56.915362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:59.271 [2024-11-18 13:04:56.915378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:59.271 [2024-11-18 13:04:56.915383] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:59.271 [2024-11-18 13:04:56.915386] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:59.271 [2024-11-18 13:04:56.915420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.271 [2024-11-18 13:04:56.915426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.271 [2024-11-18 13:04:56.915429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 00:21:59.272 [2024-11-18 13:04:56.915441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:59.272 [2024-11-18 13:04:56.915459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.272 [2024-11-18 13:04:56.922360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.272 [2024-11-18 13:04:56.922369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.272 [2024-11-18 13:04:56.922372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.272 [2024-11-18 13:04:56.922386] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:59.272 [2024-11-18 13:04:56.922393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:59.272 [2024-11-18 13:04:56.922398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:59.272 [2024-11-18 13:04:56.922411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 
00:21:59.272 [2024-11-18 13:04:56.922425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.272 [2024-11-18 13:04:56.922438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.272 [2024-11-18 13:04:56.922526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.272 [2024-11-18 13:04:56.922532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.272 [2024-11-18 13:04:56.922537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.272 [2024-11-18 13:04:56.922546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:59.272 [2024-11-18 13:04:56.922552] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:59.272 [2024-11-18 13:04:56.922558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 00:21:59.272 [2024-11-18 13:04:56.922571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.272 [2024-11-18 13:04:56.922581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.272 [2024-11-18 13:04:56.922644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.272 [2024-11-18 13:04:56.922649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:59.272 [2024-11-18 13:04:56.922652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.272 [2024-11-18 13:04:56.922660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:59.272 [2024-11-18 13:04:56.922667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:59.272 [2024-11-18 13:04:56.922673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 00:21:59.272 [2024-11-18 13:04:56.922685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.272 [2024-11-18 13:04:56.922695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.272 [2024-11-18 13:04:56.922760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.272 [2024-11-18 13:04:56.922766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.272 [2024-11-18 13:04:56.922769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.272 [2024-11-18 13:04:56.922777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:59.272 [2024-11-18 13:04:56.922785] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 00:21:59.272 [2024-11-18 13:04:56.922797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.272 [2024-11-18 13:04:56.922806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.272 [2024-11-18 13:04:56.922873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.272 [2024-11-18 13:04:56.922878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.272 [2024-11-18 13:04:56.922881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.922885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.272 [2024-11-18 13:04:56.922891] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:59.272 [2024-11-18 13:04:56.922895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:59.272 [2024-11-18 13:04:56.922901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:59.272 [2024-11-18 13:04:56.923009] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:59.272 [2024-11-18 13:04:56.923013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:59.272 [2024-11-18 13:04:56.923021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 00:21:59.272 [2024-11-18 13:04:56.923033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.272 [2024-11-18 13:04:56.923043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.272 [2024-11-18 13:04:56.923109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.272 [2024-11-18 13:04:56.923115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.272 [2024-11-18 13:04:56.923118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.272 [2024-11-18 13:04:56.923125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:59.272 [2024-11-18 13:04:56.923133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 00:21:59.272 [2024-11-18 13:04:56.923145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.272 [2024-11-18 13:04:56.923155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.272 [2024-11-18 
13:04:56.923224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.272 [2024-11-18 13:04:56.923230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.272 [2024-11-18 13:04:56.923233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.272 [2024-11-18 13:04:56.923240] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:59.272 [2024-11-18 13:04:56.923244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:59.272 [2024-11-18 13:04:56.923251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:59.272 [2024-11-18 13:04:56.923258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:59.272 [2024-11-18 13:04:56.923266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 00:21:59.272 [2024-11-18 13:04:56.923276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.272 [2024-11-18 13:04:56.923289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.272 [2024-11-18 13:04:56.923400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.272 [2024-11-18 13:04:56.923406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:21:59.272 [2024-11-18 13:04:56.923409] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923412] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1551690): datao=0, datal=4096, cccid=0 00:21:59.272 [2024-11-18 13:04:56.923417] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b3100) on tqpair(0x1551690): expected_datao=0, payload_size=4096 00:21:59.272 [2024-11-18 13:04:56.923421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923431] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.923435] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.964417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.272 [2024-11-18 13:04:56.964427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.272 [2024-11-18 13:04:56.964430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.272 [2024-11-18 13:04:56.964434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.272 [2024-11-18 13:04:56.964442] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:59.272 [2024-11-18 13:04:56.964446] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:59.272 [2024-11-18 13:04:56.964450] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:59.273 [2024-11-18 13:04:56.964455] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:59.273 [2024-11-18 13:04:56.964463] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:59.273 [2024-11-18 13:04:56.964467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:59.273 [2024-11-18 13:04:56.964476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:59.273 [2024-11-18 13:04:56.964483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 00:21:59.273 [2024-11-18 13:04:56.964496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.273 [2024-11-18 13:04:56.964508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.273 [2024-11-18 13:04:56.964584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.273 [2024-11-18 13:04:56.964590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.273 [2024-11-18 13:04:56.964593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.273 [2024-11-18 13:04:56.964606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1551690) 00:21:59.273 [2024-11-18 13:04:56.964618] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.273 [2024-11-18 13:04:56.964624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1551690) 00:21:59.273 [2024-11-18 13:04:56.964637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.273 [2024-11-18 13:04:56.964642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1551690) 00:21:59.273 [2024-11-18 13:04:56.964653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.273 [2024-11-18 13:04:56.964659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.273 [2024-11-18 13:04:56.964670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.273 [2024-11-18 13:04:56.964674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:59.273 [2024-11-18 13:04:56.964682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:59.273 [2024-11-18 13:04:56.964688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1551690) 00:21:59.273 [2024-11-18 13:04:56.964697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.273 [2024-11-18 13:04:56.964708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3100, cid 0, qid 0 00:21:59.273 [2024-11-18 13:04:56.964713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3280, cid 1, qid 0 00:21:59.273 [2024-11-18 13:04:56.964717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3400, cid 2, qid 0 00:21:59.273 [2024-11-18 13:04:56.964721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.273 [2024-11-18 13:04:56.964725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3700, cid 4, qid 0 00:21:59.273 [2024-11-18 13:04:56.964825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.273 [2024-11-18 13:04:56.964831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.273 [2024-11-18 13:04:56.964834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3700) on tqpair=0x1551690 00:21:59.273 [2024-11-18 13:04:56.964844] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:59.273 [2024-11-18 13:04:56.964849] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:21:59.273 [2024-11-18 13:04:56.964859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1551690) 00:21:59.273 [2024-11-18 13:04:56.964868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.273 [2024-11-18 13:04:56.964878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3700, cid 4, qid 0 00:21:59.273 [2024-11-18 13:04:56.964949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.273 [2024-11-18 13:04:56.964956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.273 [2024-11-18 13:04:56.964960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964963] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1551690): datao=0, datal=4096, cccid=4 00:21:59.273 [2024-11-18 13:04:56.964967] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b3700) on tqpair(0x1551690): expected_datao=0, payload_size=4096 00:21:59.273 [2024-11-18 13:04:56.964971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964983] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.964987] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.273 [2024-11-18 13:04:56.965035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.273 [2024-11-18 13:04:56.965038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x15b3700) on tqpair=0x1551690 00:21:59.273 [2024-11-18 13:04:56.965052] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:59.273 [2024-11-18 13:04:56.965073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1551690) 00:21:59.273 [2024-11-18 13:04:56.965083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.273 [2024-11-18 13:04:56.965089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1551690) 00:21:59.273 [2024-11-18 13:04:56.965101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.273 [2024-11-18 13:04:56.965114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3700, cid 4, qid 0 00:21:59.273 [2024-11-18 13:04:56.965119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3880, cid 5, qid 0 00:21:59.273 [2024-11-18 13:04:56.965230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.273 [2024-11-18 13:04:56.965236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.273 [2024-11-18 13:04:56.965240] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965243] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1551690): datao=0, datal=1024, cccid=4 00:21:59.273 [2024-11-18 13:04:56.965247] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b3700) on tqpair(0x1551690): expected_datao=0, payload_size=1024 00:21:59.273 [2024-11-18 13:04:56.965250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965256] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965259] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.273 [2024-11-18 13:04:56.965269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.273 [2024-11-18 13:04:56.965272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.273 [2024-11-18 13:04:56.965275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3880) on tqpair=0x1551690 00:21:59.534 [2024-11-18 13:04:57.010363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.534 [2024-11-18 13:04:57.010374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.534 [2024-11-18 13:04:57.010378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.010381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3700) on tqpair=0x1551690 00:21:59.534 [2024-11-18 13:04:57.010395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.010399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1551690) 00:21:59.534 [2024-11-18 13:04:57.010406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.534 [2024-11-18 13:04:57.010422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3700, cid 4, qid 0 00:21:59.534 [2024-11-18 13:04:57.010506] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.534 [2024-11-18 13:04:57.010512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.534 [2024-11-18 13:04:57.010515] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.010518] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1551690): datao=0, datal=3072, cccid=4 00:21:59.534 [2024-11-18 13:04:57.010522] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b3700) on tqpair(0x1551690): expected_datao=0, payload_size=3072 00:21:59.534 [2024-11-18 13:04:57.010527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.010542] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.010546] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.052432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.534 [2024-11-18 13:04:57.052444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.534 [2024-11-18 13:04:57.052447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.052451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3700) on tqpair=0x1551690 00:21:59.534 [2024-11-18 13:04:57.052461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.052465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1551690) 00:21:59.534 [2024-11-18 13:04:57.052472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.534 [2024-11-18 13:04:57.052488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3700, cid 4, qid 0 00:21:59.534 [2024-11-18 
13:04:57.052592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.534 [2024-11-18 13:04:57.052597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.534 [2024-11-18 13:04:57.052600] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.052604] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1551690): datao=0, datal=8, cccid=4 00:21:59.534 [2024-11-18 13:04:57.052608] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b3700) on tqpair(0x1551690): expected_datao=0, payload_size=8 00:21:59.534 [2024-11-18 13:04:57.052612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.052617] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.052621] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.098362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.534 [2024-11-18 13:04:57.098372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.534 [2024-11-18 13:04:57.098375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.534 [2024-11-18 13:04:57.098378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3700) on tqpair=0x1551690 00:21:59.534 ===================================================== 00:21:59.534 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:59.534 ===================================================== 00:21:59.534 Controller Capabilities/Features 00:21:59.534 ================================ 00:21:59.534 Vendor ID: 0000 00:21:59.534 Subsystem Vendor ID: 0000 00:21:59.534 Serial Number: .................... 00:21:59.534 Model Number: ........................................ 
00:21:59.534 Firmware Version: 25.01 00:21:59.534 Recommended Arb Burst: 0 00:21:59.534 IEEE OUI Identifier: 00 00 00 00:21:59.534 Multi-path I/O 00:21:59.534 May have multiple subsystem ports: No 00:21:59.534 May have multiple controllers: No 00:21:59.534 Associated with SR-IOV VF: No 00:21:59.534 Max Data Transfer Size: 131072 00:21:59.534 Max Number of Namespaces: 0 00:21:59.534 Max Number of I/O Queues: 1024 00:21:59.534 NVMe Specification Version (VS): 1.3 00:21:59.534 NVMe Specification Version (Identify): 1.3 00:21:59.534 Maximum Queue Entries: 128 00:21:59.534 Contiguous Queues Required: Yes 00:21:59.534 Arbitration Mechanisms Supported 00:21:59.534 Weighted Round Robin: Not Supported 00:21:59.534 Vendor Specific: Not Supported 00:21:59.534 Reset Timeout: 15000 ms 00:21:59.534 Doorbell Stride: 4 bytes 00:21:59.534 NVM Subsystem Reset: Not Supported 00:21:59.534 Command Sets Supported 00:21:59.534 NVM Command Set: Supported 00:21:59.534 Boot Partition: Not Supported 00:21:59.534 Memory Page Size Minimum: 4096 bytes 00:21:59.534 Memory Page Size Maximum: 4096 bytes 00:21:59.534 Persistent Memory Region: Not Supported 00:21:59.534 Optional Asynchronous Events Supported 00:21:59.534 Namespace Attribute Notices: Not Supported 00:21:59.534 Firmware Activation Notices: Not Supported 00:21:59.534 ANA Change Notices: Not Supported 00:21:59.534 PLE Aggregate Log Change Notices: Not Supported 00:21:59.534 LBA Status Info Alert Notices: Not Supported 00:21:59.534 EGE Aggregate Log Change Notices: Not Supported 00:21:59.534 Normal NVM Subsystem Shutdown event: Not Supported 00:21:59.534 Zone Descriptor Change Notices: Not Supported 00:21:59.534 Discovery Log Change Notices: Supported 00:21:59.534 Controller Attributes 00:21:59.534 128-bit Host Identifier: Not Supported 00:21:59.534 Non-Operational Permissive Mode: Not Supported 00:21:59.534 NVM Sets: Not Supported 00:21:59.534 Read Recovery Levels: Not Supported 00:21:59.534 Endurance Groups: Not Supported 00:21:59.534 
Predictable Latency Mode: Not Supported 00:21:59.534 Traffic Based Keep ALive: Not Supported 00:21:59.534 Namespace Granularity: Not Supported 00:21:59.534 SQ Associations: Not Supported 00:21:59.534 UUID List: Not Supported 00:21:59.534 Multi-Domain Subsystem: Not Supported 00:21:59.534 Fixed Capacity Management: Not Supported 00:21:59.535 Variable Capacity Management: Not Supported 00:21:59.535 Delete Endurance Group: Not Supported 00:21:59.535 Delete NVM Set: Not Supported 00:21:59.535 Extended LBA Formats Supported: Not Supported 00:21:59.535 Flexible Data Placement Supported: Not Supported 00:21:59.535 00:21:59.535 Controller Memory Buffer Support 00:21:59.535 ================================ 00:21:59.535 Supported: No 00:21:59.535 00:21:59.535 Persistent Memory Region Support 00:21:59.535 ================================ 00:21:59.535 Supported: No 00:21:59.535 00:21:59.535 Admin Command Set Attributes 00:21:59.535 ============================ 00:21:59.535 Security Send/Receive: Not Supported 00:21:59.535 Format NVM: Not Supported 00:21:59.535 Firmware Activate/Download: Not Supported 00:21:59.535 Namespace Management: Not Supported 00:21:59.535 Device Self-Test: Not Supported 00:21:59.535 Directives: Not Supported 00:21:59.535 NVMe-MI: Not Supported 00:21:59.535 Virtualization Management: Not Supported 00:21:59.535 Doorbell Buffer Config: Not Supported 00:21:59.535 Get LBA Status Capability: Not Supported 00:21:59.535 Command & Feature Lockdown Capability: Not Supported 00:21:59.535 Abort Command Limit: 1 00:21:59.535 Async Event Request Limit: 4 00:21:59.535 Number of Firmware Slots: N/A 00:21:59.535 Firmware Slot 1 Read-Only: N/A 00:21:59.535 Firmware Activation Without Reset: N/A 00:21:59.535 Multiple Update Detection Support: N/A 00:21:59.535 Firmware Update Granularity: No Information Provided 00:21:59.535 Per-Namespace SMART Log: No 00:21:59.535 Asymmetric Namespace Access Log Page: Not Supported 00:21:59.535 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:59.535 Command Effects Log Page: Not Supported 00:21:59.535 Get Log Page Extended Data: Supported 00:21:59.535 Telemetry Log Pages: Not Supported 00:21:59.535 Persistent Event Log Pages: Not Supported 00:21:59.535 Supported Log Pages Log Page: May Support 00:21:59.535 Commands Supported & Effects Log Page: Not Supported 00:21:59.535 Feature Identifiers & Effects Log Page:May Support 00:21:59.535 NVMe-MI Commands & Effects Log Page: May Support 00:21:59.535 Data Area 4 for Telemetry Log: Not Supported 00:21:59.535 Error Log Page Entries Supported: 128 00:21:59.535 Keep Alive: Not Supported 00:21:59.535 00:21:59.535 NVM Command Set Attributes 00:21:59.535 ========================== 00:21:59.535 Submission Queue Entry Size 00:21:59.535 Max: 1 00:21:59.535 Min: 1 00:21:59.535 Completion Queue Entry Size 00:21:59.535 Max: 1 00:21:59.535 Min: 1 00:21:59.535 Number of Namespaces: 0 00:21:59.535 Compare Command: Not Supported 00:21:59.535 Write Uncorrectable Command: Not Supported 00:21:59.535 Dataset Management Command: Not Supported 00:21:59.535 Write Zeroes Command: Not Supported 00:21:59.535 Set Features Save Field: Not Supported 00:21:59.535 Reservations: Not Supported 00:21:59.535 Timestamp: Not Supported 00:21:59.535 Copy: Not Supported 00:21:59.535 Volatile Write Cache: Not Present 00:21:59.535 Atomic Write Unit (Normal): 1 00:21:59.535 Atomic Write Unit (PFail): 1 00:21:59.535 Atomic Compare & Write Unit: 1 00:21:59.535 Fused Compare & Write: Supported 00:21:59.535 Scatter-Gather List 00:21:59.535 SGL Command Set: Supported 00:21:59.535 SGL Keyed: Supported 00:21:59.535 SGL Bit Bucket Descriptor: Not Supported 00:21:59.535 SGL Metadata Pointer: Not Supported 00:21:59.535 Oversized SGL: Not Supported 00:21:59.535 SGL Metadata Address: Not Supported 00:21:59.535 SGL Offset: Supported 00:21:59.535 Transport SGL Data Block: Not Supported 00:21:59.535 Replay Protected Memory Block: Not Supported 00:21:59.535 00:21:59.535 
Firmware Slot Information 00:21:59.535 ========================= 00:21:59.535 Active slot: 0 00:21:59.535 00:21:59.535 00:21:59.535 Error Log 00:21:59.535 ========= 00:21:59.535 00:21:59.535 Active Namespaces 00:21:59.535 ================= 00:21:59.535 Discovery Log Page 00:21:59.535 ================== 00:21:59.535 Generation Counter: 2 00:21:59.535 Number of Records: 2 00:21:59.535 Record Format: 0 00:21:59.535 00:21:59.535 Discovery Log Entry 0 00:21:59.535 ---------------------- 00:21:59.535 Transport Type: 3 (TCP) 00:21:59.535 Address Family: 1 (IPv4) 00:21:59.535 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:59.535 Entry Flags: 00:21:59.535 Duplicate Returned Information: 1 00:21:59.535 Explicit Persistent Connection Support for Discovery: 1 00:21:59.535 Transport Requirements: 00:21:59.535 Secure Channel: Not Required 00:21:59.535 Port ID: 0 (0x0000) 00:21:59.535 Controller ID: 65535 (0xffff) 00:21:59.535 Admin Max SQ Size: 128 00:21:59.535 Transport Service Identifier: 4420 00:21:59.535 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:59.535 Transport Address: 10.0.0.2 00:21:59.535 Discovery Log Entry 1 00:21:59.535 ---------------------- 00:21:59.535 Transport Type: 3 (TCP) 00:21:59.535 Address Family: 1 (IPv4) 00:21:59.535 Subsystem Type: 2 (NVM Subsystem) 00:21:59.535 Entry Flags: 00:21:59.535 Duplicate Returned Information: 0 00:21:59.535 Explicit Persistent Connection Support for Discovery: 0 00:21:59.535 Transport Requirements: 00:21:59.535 Secure Channel: Not Required 00:21:59.535 Port ID: 0 (0x0000) 00:21:59.535 Controller ID: 65535 (0xffff) 00:21:59.535 Admin Max SQ Size: 128 00:21:59.535 Transport Service Identifier: 4420 00:21:59.535 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:59.535 Transport Address: 10.0.0.2 [2024-11-18 13:04:57.098459] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:59.535 [2024-11-18 
13:04:57.098470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3100) on tqpair=0x1551690 00:21:59.535 [2024-11-18 13:04:57.098476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.535 [2024-11-18 13:04:57.098483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3280) on tqpair=0x1551690 00:21:59.535 [2024-11-18 13:04:57.098487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.535 [2024-11-18 13:04:57.098491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3400) on tqpair=0x1551690 00:21:59.535 [2024-11-18 13:04:57.098496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.535 [2024-11-18 13:04:57.098500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.535 [2024-11-18 13:04:57.098504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.535 [2024-11-18 13:04:57.098512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.535 [2024-11-18 13:04:57.098515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.535 [2024-11-18 13:04:57.098519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.535 [2024-11-18 13:04:57.098525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.535 [2024-11-18 13:04:57.098539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.535 [2024-11-18 13:04:57.098606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.535 [2024-11-18 
13:04:57.098612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.535 [2024-11-18 13:04:57.098615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.535 [2024-11-18 13:04:57.098619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.535 [2024-11-18 13:04:57.098627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.535 [2024-11-18 13:04:57.098631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.535 [2024-11-18 13:04:57.098634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.535 [2024-11-18 13:04:57.098640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.535 [2024-11-18 13:04:57.098652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.535 [2024-11-18 13:04:57.098733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.535 [2024-11-18 13:04:57.098739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.535 [2024-11-18 13:04:57.098742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.535 [2024-11-18 13:04:57.098745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.535 [2024-11-18 13:04:57.098750] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:59.535 [2024-11-18 13:04:57.098754] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:59.535 [2024-11-18 13:04:57.098762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.535 [2024-11-18 13:04:57.098765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.535 
[2024-11-18 13:04:57.098768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.535 [2024-11-18 13:04:57.098774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.098783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.098843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.098848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.098851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.098854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.098865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.098869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.098872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.098877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.098887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.098952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.098958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.098961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.098964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on 
tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.098972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.098975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.098978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.098984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.098993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.099064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.099069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.099072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.099084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.099096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.099106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.099168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.099174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:21:59.536 [2024-11-18 13:04:57.099176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.099188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.099200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.099209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.099273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.099278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.099281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.099293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.099307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.099317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.099384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.099390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.099393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.099404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.099417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.099426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.099486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.099492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.099495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.099506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.099518] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.099527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.099592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.099597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.099600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.099611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.099624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.099633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.099703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.099708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.099711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.099724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099727] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.099738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.099748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.099808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.099813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.099816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.099828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.099840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.099849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.099914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.099920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.099923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099926] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.099934] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.099941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.536 [2024-11-18 13:04:57.099946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.536 [2024-11-18 13:04:57.099956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.536 [2024-11-18 13:04:57.100020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.536 [2024-11-18 13:04:57.100025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.536 [2024-11-18 13:04:57.100028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.100032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.536 [2024-11-18 13:04:57.100040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.536 [2024-11-18 13:04:57.100043] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.100052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.100061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.100125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 
13:04:57.100130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.100133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.100145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.100158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.100168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.100234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.100240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.100243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.100254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.100267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 
13:04:57.100276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.100344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.100349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.100357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.100369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.100381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.100392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.100455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.100460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.100463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.100475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.100487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.100496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.100568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.100573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.100576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.100589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.100601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.100612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.100671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.100676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.100679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.100691] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.100703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.100713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.100782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.100788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.100790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.100802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.100815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.100825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.100888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.100893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.100896] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.100908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.100914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.100920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.100929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.100992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.100998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.101001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.101004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.101012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.101015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.101018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.101024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.101035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 
13:04:57.101103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.101108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.101111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.101114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.101123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.101126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.101130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.537 [2024-11-18 13:04:57.101135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.537 [2024-11-18 13:04:57.101145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.537 [2024-11-18 13:04:57.101206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.537 [2024-11-18 13:04:57.101212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.537 [2024-11-18 13:04:57.101215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.537 [2024-11-18 13:04:57.101218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.537 [2024-11-18 13:04:57.101226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.101238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.101248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.101317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.101323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.101326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.101338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.101350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.101364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.101428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.101433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.101436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.101447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:21:59.538 [2024-11-18 13:04:57.101454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.101460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.101469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.101532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.101537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.101540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.101552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.101564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.101574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.101644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.101650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.101653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) 
on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.101664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.101677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.101686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.101755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.101760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.101763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.101775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.101787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.101797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.101861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.101867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:21:59.538 [2024-11-18 13:04:57.101870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.101882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.101894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.101904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.101963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.101971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.101975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.101986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.101992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.101998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.102007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.102066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.102072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.102075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.102078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.102086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.102090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.102093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.102099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.102108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.102171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.102176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.102179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.102182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.102190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.102194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.102197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.102203] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.102212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.102281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.102286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.102289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.102292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.102301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.102305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.102308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.102313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.102323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.106359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.538 [2024-11-18 13:04:57.106366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.538 [2024-11-18 13:04:57.106371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.106375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.538 [2024-11-18 13:04:57.106384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.106387] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.538 [2024-11-18 13:04:57.106390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1551690) 00:21:59.538 [2024-11-18 13:04:57.106396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.538 [2024-11-18 13:04:57.106407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3580, cid 3, qid 0 00:21:59.538 [2024-11-18 13:04:57.106523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.539 [2024-11-18 13:04:57.106529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.539 [2024-11-18 13:04:57.106532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.106535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3580) on tqpair=0x1551690 00:21:59.539 [2024-11-18 13:04:57.106542] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:21:59.539 00:21:59.539 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:59.539 [2024-11-18 13:04:57.144423] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:21:59.539 [2024-11-18 13:04:57.144457] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404643 ] 00:21:59.539 [2024-11-18 13:04:57.184986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:59.539 [2024-11-18 13:04:57.185030] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:59.539 [2024-11-18 13:04:57.185035] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:59.539 [2024-11-18 13:04:57.185046] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:59.539 [2024-11-18 13:04:57.185053] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:59.539 [2024-11-18 13:04:57.188537] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:59.539 [2024-11-18 13:04:57.188567] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa8b690 0 00:21:59.539 [2024-11-18 13:04:57.196363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:59.539 [2024-11-18 13:04:57.196377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:59.539 [2024-11-18 13:04:57.196380] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:59.539 [2024-11-18 13:04:57.196384] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:59.539 [2024-11-18 13:04:57.196409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.196414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.196417] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.539 [2024-11-18 13:04:57.196427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:59.539 [2024-11-18 13:04:57.196444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed100, cid 0, qid 0 00:21:59.539 [2024-11-18 13:04:57.204361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.539 [2024-11-18 13:04:57.204369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.539 [2024-11-18 13:04:57.204372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.539 [2024-11-18 13:04:57.204384] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:59.539 [2024-11-18 13:04:57.204390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:59.539 [2024-11-18 13:04:57.204394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:59.539 [2024-11-18 13:04:57.204405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.539 [2024-11-18 13:04:57.204420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.539 [2024-11-18 13:04:57.204433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed100, cid 0, qid 0 00:21:59.539 [2024-11-18 13:04:57.204602] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.539 [2024-11-18 13:04:57.204608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.539 [2024-11-18 13:04:57.204611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.539 [2024-11-18 13:04:57.204619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:59.539 [2024-11-18 13:04:57.204625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:59.539 [2024-11-18 13:04:57.204632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.539 [2024-11-18 13:04:57.204645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.539 [2024-11-18 13:04:57.204655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed100, cid 0, qid 0 00:21:59.539 [2024-11-18 13:04:57.204751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.539 [2024-11-18 13:04:57.204756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.539 [2024-11-18 13:04:57.204759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.539 [2024-11-18 13:04:57.204767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:21:59.539 [2024-11-18 13:04:57.204774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:59.539 [2024-11-18 13:04:57.204780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.539 [2024-11-18 13:04:57.204793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.539 [2024-11-18 13:04:57.204802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed100, cid 0, qid 0 00:21:59.539 [2024-11-18 13:04:57.204861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.539 [2024-11-18 13:04:57.204869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.539 [2024-11-18 13:04:57.204873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.539 [2024-11-18 13:04:57.204880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:59.539 [2024-11-18 13:04:57.204889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.204896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.539 [2024-11-18 13:04:57.204901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.539 [2024-11-18 13:04:57.204911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed100, cid 0, qid 0 00:21:59.539 [2024-11-18 13:04:57.205001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.539 [2024-11-18 13:04:57.205007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.539 [2024-11-18 13:04:57.205010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.205013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.539 [2024-11-18 13:04:57.205017] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:59.539 [2024-11-18 13:04:57.205021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:59.539 [2024-11-18 13:04:57.205028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:59.539 [2024-11-18 13:04:57.205135] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:59.539 [2024-11-18 13:04:57.205140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:59.539 [2024-11-18 13:04:57.205146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.205150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.539 [2024-11-18 13:04:57.205153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.539 [2024-11-18 13:04:57.205158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.540 [2024-11-18 13:04:57.205169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed100, cid 0, qid 0 00:21:59.540 [2024-11-18 13:04:57.205286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.540 [2024-11-18 13:04:57.205291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.540 [2024-11-18 13:04:57.205294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.540 [2024-11-18 13:04:57.205302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:59.540 [2024-11-18 13:04:57.205310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.540 [2024-11-18 13:04:57.205323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.540 [2024-11-18 13:04:57.205332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed100, cid 0, qid 0 00:21:59.540 [2024-11-18 13:04:57.205435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.540 [2024-11-18 13:04:57.205441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.540 [2024-11-18 13:04:57.205445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.540 [2024-11-18 13:04:57.205452] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:59.540 [2024-11-18 13:04:57.205456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.205463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:59.540 [2024-11-18 13:04:57.205469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.205477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.540 [2024-11-18 13:04:57.205486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.540 [2024-11-18 13:04:57.205497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed100, cid 0, qid 0 00:21:59.540 [2024-11-18 13:04:57.205583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.540 [2024-11-18 13:04:57.205589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.540 [2024-11-18 13:04:57.205592] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205595] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa8b690): datao=0, datal=4096, cccid=0 00:21:59.540 [2024-11-18 13:04:57.205599] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaed100) on tqpair(0xa8b690): expected_datao=0, payload_size=4096 00:21:59.540 [2024-11-18 13:04:57.205603] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205619] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205624] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.540 [2024-11-18 13:04:57.205692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.540 [2024-11-18 13:04:57.205695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.540 [2024-11-18 13:04:57.205704] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:59.540 [2024-11-18 13:04:57.205709] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:59.540 [2024-11-18 13:04:57.205713] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:59.540 [2024-11-18 13:04:57.205717] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:59.540 [2024-11-18 13:04:57.205723] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:59.540 [2024-11-18 13:04:57.205727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.205734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.205740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205744] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.540 [2024-11-18 13:04:57.205755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.540 [2024-11-18 13:04:57.205765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed100, cid 0, qid 0 00:21:59.540 [2024-11-18 13:04:57.205826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.540 [2024-11-18 13:04:57.205832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.540 [2024-11-18 13:04:57.205835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.540 [2024-11-18 13:04:57.205846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa8b690) 00:21:59.540 [2024-11-18 13:04:57.205858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.540 [2024-11-18 13:04:57.205863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa8b690) 00:21:59.540 [2024-11-18 13:04:57.205875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:59.540 [2024-11-18 13:04:57.205880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa8b690) 00:21:59.540 [2024-11-18 13:04:57.205891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.540 [2024-11-18 13:04:57.205896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa8b690) 00:21:59.540 [2024-11-18 13:04:57.205907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.540 [2024-11-18 13:04:57.205912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.205920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.205926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.205929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa8b690) 00:21:59.540 [2024-11-18 13:04:57.205935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.540 [2024-11-18 13:04:57.205946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xaed100, cid 0, qid 0 00:21:59.540 [2024-11-18 13:04:57.205951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed280, cid 1, qid 0 00:21:59.540 [2024-11-18 13:04:57.205955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed400, cid 2, qid 0 00:21:59.540 [2024-11-18 13:04:57.205959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed580, cid 3, qid 0 00:21:59.540 [2024-11-18 13:04:57.205963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed700, cid 4, qid 0 00:21:59.540 [2024-11-18 13:04:57.206078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.540 [2024-11-18 13:04:57.206084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.540 [2024-11-18 13:04:57.206087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.206091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed700) on tqpair=0xa8b690 00:21:59.540 [2024-11-18 13:04:57.206097] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:59.540 [2024-11-18 13:04:57.206101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.206108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.206114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.206119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.206123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.540 [2024-11-18 
13:04:57.206126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa8b690) 00:21:59.540 [2024-11-18 13:04:57.206131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.540 [2024-11-18 13:04:57.206141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed700, cid 4, qid 0 00:21:59.540 [2024-11-18 13:04:57.206229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.540 [2024-11-18 13:04:57.206235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.540 [2024-11-18 13:04:57.206238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.540 [2024-11-18 13:04:57.206241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed700) on tqpair=0xa8b690 00:21:59.540 [2024-11-18 13:04:57.206293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.206303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:59.540 [2024-11-18 13:04:57.206310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.541 [2024-11-18 13:04:57.206313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa8b690) 00:21:59.541 [2024-11-18 13:04:57.206319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.541 [2024-11-18 13:04:57.206329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed700, cid 4, qid 0 00:21:59.541 [2024-11-18 13:04:57.206411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.541 [2024-11-18 13:04:57.206418] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.541 [2024-11-18 13:04:57.206421] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.541 [2024-11-18 13:04:57.206424] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa8b690): datao=0, datal=4096, cccid=4 00:21:59.541 [2024-11-18 13:04:57.206428] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaed700) on tqpair(0xa8b690): expected_datao=0, payload_size=4096 00:21:59.541 [2024-11-18 13:04:57.206432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.541 [2024-11-18 13:04:57.206450] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.541 [2024-11-18 13:04:57.206453] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.801 [2024-11-18 13:04:57.247543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.801 [2024-11-18 13:04:57.247555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.801 [2024-11-18 13:04:57.247559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.801 [2024-11-18 13:04:57.247565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed700) on tqpair=0xa8b690 00:21:59.801 [2024-11-18 13:04:57.247575] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:59.801 [2024-11-18 13:04:57.247588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:59.801 [2024-11-18 13:04:57.247598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:59.801 [2024-11-18 13:04:57.247605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.801 [2024-11-18 13:04:57.247608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0xa8b690) 00:21:59.801 [2024-11-18 13:04:57.247615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.801 [2024-11-18 13:04:57.247627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed700, cid 4, qid 0 00:21:59.801 [2024-11-18 13:04:57.247712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.801 [2024-11-18 13:04:57.247718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.801 [2024-11-18 13:04:57.247721] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.801 [2024-11-18 13:04:57.247724] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa8b690): datao=0, datal=4096, cccid=4 00:21:59.801 [2024-11-18 13:04:57.247728] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaed700) on tqpair(0xa8b690): expected_datao=0, payload_size=4096 00:21:59.801 [2024-11-18 13:04:57.247732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.801 [2024-11-18 13:04:57.247759] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.801 [2024-11-18 13:04:57.247763] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.801 [2024-11-18 13:04:57.291358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.801 [2024-11-18 13:04:57.291367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.801 [2024-11-18 13:04:57.291370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.801 [2024-11-18 13:04:57.291374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed700) on tqpair=0xa8b690 00:21:59.801 [2024-11-18 13:04:57.291386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:59.801 [2024-11-18 
13:04:57.291396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:59.801 [2024-11-18 13:04:57.291403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.801 [2024-11-18 13:04:57.291407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.291414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.802 [2024-11-18 13:04:57.291426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed700, cid 4, qid 0 00:21:59.802 [2024-11-18 13:04:57.291587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.802 [2024-11-18 13:04:57.291593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.802 [2024-11-18 13:04:57.291596] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.291599] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa8b690): datao=0, datal=4096, cccid=4 00:21:59.802 [2024-11-18 13:04:57.291603] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaed700) on tqpair(0xa8b690): expected_datao=0, payload_size=4096 00:21:59.802 [2024-11-18 13:04:57.291607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.291617] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.291623] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.802 [2024-11-18 13:04:57.333508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.802 [2024-11-18 13:04:57.333512] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed700) on tqpair=0xa8b690 00:21:59.802 [2024-11-18 13:04:57.333522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:59.802 [2024-11-18 13:04:57.333530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:59.802 [2024-11-18 13:04:57.333538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:59.802 [2024-11-18 13:04:57.333543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:59.802 [2024-11-18 13:04:57.333548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:59.802 [2024-11-18 13:04:57.333552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:59.802 [2024-11-18 13:04:57.333557] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:59.802 [2024-11-18 13:04:57.333561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:59.802 [2024-11-18 13:04:57.333566] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:59.802 [2024-11-18 13:04:57.333579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333583] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.333589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.802 [2024-11-18 13:04:57.333595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.333607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.802 [2024-11-18 13:04:57.333621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed700, cid 4, qid 0 00:21:59.802 [2024-11-18 13:04:57.333626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed880, cid 5, qid 0 00:21:59.802 [2024-11-18 13:04:57.333711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.802 [2024-11-18 13:04:57.333716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.802 [2024-11-18 13:04:57.333720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed700) on tqpair=0xa8b690 00:21:59.802 [2024-11-18 13:04:57.333729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.802 [2024-11-18 13:04:57.333734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.802 [2024-11-18 13:04:57.333737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed880) on tqpair=0xa8b690 00:21:59.802 [2024-11-18 
13:04:57.333748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.333760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.802 [2024-11-18 13:04:57.333769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed880, cid 5, qid 0 00:21:59.802 [2024-11-18 13:04:57.333855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.802 [2024-11-18 13:04:57.333861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.802 [2024-11-18 13:04:57.333864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed880) on tqpair=0xa8b690 00:21:59.802 [2024-11-18 13:04:57.333876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.333885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.802 [2024-11-18 13:04:57.333895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed880, cid 5, qid 0 00:21:59.802 [2024-11-18 13:04:57.333963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.802 [2024-11-18 13:04:57.333969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.802 [2024-11-18 13:04:57.333973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xaed880) on tqpair=0xa8b690 00:21:59.802 [2024-11-18 13:04:57.333984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.333987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.333993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.802 [2024-11-18 13:04:57.334002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed880, cid 5, qid 0 00:21:59.802 [2024-11-18 13:04:57.334080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.802 [2024-11-18 13:04:57.334086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.802 [2024-11-18 13:04:57.334089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed880) on tqpair=0xa8b690 00:21:59.802 [2024-11-18 13:04:57.334107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.334117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.802 [2024-11-18 13:04:57.334124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.334132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.802 
[2024-11-18 13:04:57.334139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.334147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.802 [2024-11-18 13:04:57.334155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa8b690) 00:21:59.802 [2024-11-18 13:04:57.334164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.802 [2024-11-18 13:04:57.334177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed880, cid 5, qid 0 00:21:59.802 [2024-11-18 13:04:57.334181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed700, cid 4, qid 0 00:21:59.802 [2024-11-18 13:04:57.334185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaeda00, cid 6, qid 0 00:21:59.802 [2024-11-18 13:04:57.334189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaedb80, cid 7, qid 0 00:21:59.802 [2024-11-18 13:04:57.334349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.802 [2024-11-18 13:04:57.334359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.802 [2024-11-18 13:04:57.334363] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334366] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa8b690): datao=0, datal=8192, cccid=5 00:21:59.802 [2024-11-18 13:04:57.334370] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xaed880) on tqpair(0xa8b690): expected_datao=0, payload_size=8192 00:21:59.802 [2024-11-18 13:04:57.334374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334386] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334390] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.802 [2024-11-18 13:04:57.334403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.802 [2024-11-18 13:04:57.334406] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334409] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa8b690): datao=0, datal=512, cccid=4 00:21:59.802 [2024-11-18 13:04:57.334413] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaed700) on tqpair(0xa8b690): expected_datao=0, payload_size=512 00:21:59.802 [2024-11-18 13:04:57.334417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334422] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334425] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.802 [2024-11-18 13:04:57.334434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.802 [2024-11-18 13:04:57.334438] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334441] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa8b690): datao=0, datal=512, cccid=6 00:21:59.802 [2024-11-18 13:04:57.334445] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaeda00) on tqpair(0xa8b690): expected_datao=0, 
payload_size=512 00:21:59.802 [2024-11-18 13:04:57.334448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334454] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.802 [2024-11-18 13:04:57.334457] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.803 [2024-11-18 13:04:57.334461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.803 [2024-11-18 13:04:57.334466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.803 [2024-11-18 13:04:57.334469] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.803 [2024-11-18 13:04:57.334472] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa8b690): datao=0, datal=4096, cccid=7 00:21:59.803 [2024-11-18 13:04:57.334476] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaedb80) on tqpair(0xa8b690): expected_datao=0, payload_size=4096 00:21:59.803 [2024-11-18 13:04:57.334480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.803 [2024-11-18 13:04:57.334485] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.803 [2024-11-18 13:04:57.334489] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.803 [2024-11-18 13:04:57.334496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.803 [2024-11-18 13:04:57.334502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.803 [2024-11-18 13:04:57.334506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.803 [2024-11-18 13:04:57.334509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed880) on tqpair=0xa8b690 00:21:59.803 [2024-11-18 13:04:57.334519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.803 [2024-11-18 13:04:57.334524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.803 [2024-11-18 
13:04:57.334527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.803 [2024-11-18 13:04:57.334530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed700) on tqpair=0xa8b690 00:21:59.803 [2024-11-18 13:04:57.334538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.803 [2024-11-18 13:04:57.334544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.803 [2024-11-18 13:04:57.334547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.803 [2024-11-18 13:04:57.334550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaeda00) on tqpair=0xa8b690 00:21:59.803 [2024-11-18 13:04:57.334556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.803 [2024-11-18 13:04:57.334561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.803 [2024-11-18 13:04:57.334564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.803 [2024-11-18 13:04:57.334567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaedb80) on tqpair=0xa8b690 00:21:59.803 ===================================================== 00:21:59.803 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:59.803 ===================================================== 00:21:59.803 Controller Capabilities/Features 00:21:59.803 ================================ 00:21:59.803 Vendor ID: 8086 00:21:59.803 Subsystem Vendor ID: 8086 00:21:59.803 Serial Number: SPDK00000000000001 00:21:59.803 Model Number: SPDK bdev Controller 00:21:59.803 Firmware Version: 25.01 00:21:59.803 Recommended Arb Burst: 6 00:21:59.803 IEEE OUI Identifier: e4 d2 5c 00:21:59.803 Multi-path I/O 00:21:59.803 May have multiple subsystem ports: Yes 00:21:59.803 May have multiple controllers: Yes 00:21:59.803 Associated with SR-IOV VF: No 00:21:59.803 Max Data Transfer Size: 131072 00:21:59.803 Max Number of Namespaces: 32 00:21:59.803 
Max Number of I/O Queues: 127 00:21:59.803 NVMe Specification Version (VS): 1.3 00:21:59.803 NVMe Specification Version (Identify): 1.3 00:21:59.803 Maximum Queue Entries: 128 00:21:59.803 Contiguous Queues Required: Yes 00:21:59.803 Arbitration Mechanisms Supported 00:21:59.803 Weighted Round Robin: Not Supported 00:21:59.803 Vendor Specific: Not Supported 00:21:59.803 Reset Timeout: 15000 ms 00:21:59.803 Doorbell Stride: 4 bytes 00:21:59.803 NVM Subsystem Reset: Not Supported 00:21:59.803 Command Sets Supported 00:21:59.803 NVM Command Set: Supported 00:21:59.803 Boot Partition: Not Supported 00:21:59.803 Memory Page Size Minimum: 4096 bytes 00:21:59.803 Memory Page Size Maximum: 4096 bytes 00:21:59.803 Persistent Memory Region: Not Supported 00:21:59.803 Optional Asynchronous Events Supported 00:21:59.803 Namespace Attribute Notices: Supported 00:21:59.803 Firmware Activation Notices: Not Supported 00:21:59.803 ANA Change Notices: Not Supported 00:21:59.803 PLE Aggregate Log Change Notices: Not Supported 00:21:59.803 LBA Status Info Alert Notices: Not Supported 00:21:59.803 EGE Aggregate Log Change Notices: Not Supported 00:21:59.803 Normal NVM Subsystem Shutdown event: Not Supported 00:21:59.803 Zone Descriptor Change Notices: Not Supported 00:21:59.803 Discovery Log Change Notices: Not Supported 00:21:59.803 Controller Attributes 00:21:59.803 128-bit Host Identifier: Supported 00:21:59.803 Non-Operational Permissive Mode: Not Supported 00:21:59.803 NVM Sets: Not Supported 00:21:59.803 Read Recovery Levels: Not Supported 00:21:59.803 Endurance Groups: Not Supported 00:21:59.803 Predictable Latency Mode: Not Supported 00:21:59.803 Traffic Based Keep ALive: Not Supported 00:21:59.803 Namespace Granularity: Not Supported 00:21:59.803 SQ Associations: Not Supported 00:21:59.803 UUID List: Not Supported 00:21:59.803 Multi-Domain Subsystem: Not Supported 00:21:59.803 Fixed Capacity Management: Not Supported 00:21:59.803 Variable Capacity Management: Not Supported 
00:21:59.803 Delete Endurance Group: Not Supported 00:21:59.803 Delete NVM Set: Not Supported 00:21:59.803 Extended LBA Formats Supported: Not Supported 00:21:59.803 Flexible Data Placement Supported: Not Supported 00:21:59.803 00:21:59.803 Controller Memory Buffer Support 00:21:59.803 ================================ 00:21:59.803 Supported: No 00:21:59.803 00:21:59.803 Persistent Memory Region Support 00:21:59.803 ================================ 00:21:59.803 Supported: No 00:21:59.803 00:21:59.803 Admin Command Set Attributes 00:21:59.803 ============================ 00:21:59.803 Security Send/Receive: Not Supported 00:21:59.803 Format NVM: Not Supported 00:21:59.803 Firmware Activate/Download: Not Supported 00:21:59.803 Namespace Management: Not Supported 00:21:59.803 Device Self-Test: Not Supported 00:21:59.803 Directives: Not Supported 00:21:59.803 NVMe-MI: Not Supported 00:21:59.803 Virtualization Management: Not Supported 00:21:59.803 Doorbell Buffer Config: Not Supported 00:21:59.803 Get LBA Status Capability: Not Supported 00:21:59.803 Command & Feature Lockdown Capability: Not Supported 00:21:59.803 Abort Command Limit: 4 00:21:59.803 Async Event Request Limit: 4 00:21:59.803 Number of Firmware Slots: N/A 00:21:59.803 Firmware Slot 1 Read-Only: N/A 00:21:59.803 Firmware Activation Without Reset: N/A 00:21:59.803 Multiple Update Detection Support: N/A 00:21:59.803 Firmware Update Granularity: No Information Provided 00:21:59.803 Per-Namespace SMART Log: No 00:21:59.803 Asymmetric Namespace Access Log Page: Not Supported 00:21:59.803 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:59.803 Command Effects Log Page: Supported 00:21:59.803 Get Log Page Extended Data: Supported 00:21:59.803 Telemetry Log Pages: Not Supported 00:21:59.803 Persistent Event Log Pages: Not Supported 00:21:59.803 Supported Log Pages Log Page: May Support 00:21:59.803 Commands Supported & Effects Log Page: Not Supported 00:21:59.803 Feature Identifiers & Effects Log Page:May Support 
00:21:59.803 NVMe-MI Commands & Effects Log Page: May Support 00:21:59.803 Data Area 4 for Telemetry Log: Not Supported 00:21:59.803 Error Log Page Entries Supported: 128 00:21:59.803 Keep Alive: Supported 00:21:59.803 Keep Alive Granularity: 10000 ms 00:21:59.803 00:21:59.803 NVM Command Set Attributes 00:21:59.803 ========================== 00:21:59.803 Submission Queue Entry Size 00:21:59.803 Max: 64 00:21:59.803 Min: 64 00:21:59.803 Completion Queue Entry Size 00:21:59.803 Max: 16 00:21:59.803 Min: 16 00:21:59.803 Number of Namespaces: 32 00:21:59.803 Compare Command: Supported 00:21:59.803 Write Uncorrectable Command: Not Supported 00:21:59.804 Dataset Management Command: Supported 00:21:59.804 Write Zeroes Command: Supported 00:21:59.804 Set Features Save Field: Not Supported 00:21:59.804 Reservations: Supported 00:21:59.804 Timestamp: Not Supported 00:21:59.804 Copy: Supported 00:21:59.804 Volatile Write Cache: Present 00:21:59.804 Atomic Write Unit (Normal): 1 00:21:59.804 Atomic Write Unit (PFail): 1 00:21:59.804 Atomic Compare & Write Unit: 1 00:21:59.804 Fused Compare & Write: Supported 00:21:59.804 Scatter-Gather List 00:21:59.804 SGL Command Set: Supported 00:21:59.804 SGL Keyed: Supported 00:21:59.804 SGL Bit Bucket Descriptor: Not Supported 00:21:59.804 SGL Metadata Pointer: Not Supported 00:21:59.804 Oversized SGL: Not Supported 00:21:59.804 SGL Metadata Address: Not Supported 00:21:59.804 SGL Offset: Supported 00:21:59.804 Transport SGL Data Block: Not Supported 00:21:59.804 Replay Protected Memory Block: Not Supported 00:21:59.804 00:21:59.804 Firmware Slot Information 00:21:59.804 ========================= 00:21:59.804 Active slot: 1 00:21:59.804 Slot 1 Firmware Revision: 25.01 00:21:59.804 00:21:59.804 00:21:59.804 Commands Supported and Effects 00:21:59.804 ============================== 00:21:59.804 Admin Commands 00:21:59.804 -------------- 00:21:59.804 Get Log Page (02h): Supported 00:21:59.804 Identify (06h): Supported 00:21:59.804 Abort 
(08h): Supported 00:21:59.804 Set Features (09h): Supported 00:21:59.804 Get Features (0Ah): Supported 00:21:59.804 Asynchronous Event Request (0Ch): Supported 00:21:59.804 Keep Alive (18h): Supported 00:21:59.804 I/O Commands 00:21:59.804 ------------ 00:21:59.804 Flush (00h): Supported LBA-Change 00:21:59.804 Write (01h): Supported LBA-Change 00:21:59.804 Read (02h): Supported 00:21:59.804 Compare (05h): Supported 00:21:59.804 Write Zeroes (08h): Supported LBA-Change 00:21:59.804 Dataset Management (09h): Supported LBA-Change 00:21:59.804 Copy (19h): Supported LBA-Change 00:21:59.804 00:21:59.804 Error Log 00:21:59.804 ========= 00:21:59.804 00:21:59.804 Arbitration 00:21:59.805 =========== 00:21:59.805 Arbitration Burst: 1 00:21:59.805 00:21:59.805 Power Management 00:21:59.805 ================ 00:21:59.805 Number of Power States: 1 00:21:59.805 Current Power State: Power State #0 00:21:59.805 Power State #0: 00:21:59.805 Max Power: 0.00 W 00:21:59.805 Non-Operational State: Operational 00:21:59.805 Entry Latency: Not Reported 00:21:59.805 Exit Latency: Not Reported 00:21:59.805 Relative Read Throughput: 0 00:21:59.805 Relative Read Latency: 0 00:21:59.805 Relative Write Throughput: 0 00:21:59.805 Relative Write Latency: 0 00:21:59.805 Idle Power: Not Reported 00:21:59.805 Active Power: Not Reported 00:21:59.805 Non-Operational Permissive Mode: Not Supported 00:21:59.805 00:21:59.805 Health Information 00:21:59.805 ================== 00:21:59.805 Critical Warnings: 00:21:59.805 Available Spare Space: OK 00:21:59.805 Temperature: OK 00:21:59.805 Device Reliability: OK 00:21:59.805 Read Only: No 00:21:59.805 Volatile Memory Backup: OK 00:21:59.805 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:59.805 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:59.805 Available Spare: 0% 00:21:59.805 Available Spare Threshold: 0% 00:21:59.805 Life Percentage Used:[2024-11-18 13:04:57.334649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.805 
[2024-11-18 13:04:57.334654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa8b690) 00:21:59.805 [2024-11-18 13:04:57.334660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.805 [2024-11-18 13:04:57.334671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaedb80, cid 7, qid 0 00:21:59.805 [2024-11-18 13:04:57.334759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.806 [2024-11-18 13:04:57.334764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.806 [2024-11-18 13:04:57.334767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.806 [2024-11-18 13:04:57.334771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaedb80) on tqpair=0xa8b690 00:21:59.806 [2024-11-18 13:04:57.334798] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:59.806 [2024-11-18 13:04:57.334808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed100) on tqpair=0xa8b690 00:21:59.806 [2024-11-18 13:04:57.334813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.806 [2024-11-18 13:04:57.334817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed280) on tqpair=0xa8b690 00:21:59.806 [2024-11-18 13:04:57.334821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.806 [2024-11-18 13:04:57.334826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed400) on tqpair=0xa8b690 00:21:59.806 [2024-11-18 13:04:57.334830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.806 
[2024-11-18 13:04:57.334834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed580) on tqpair=0xa8b690 00:21:59.806 [2024-11-18 13:04:57.334838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.806 [2024-11-18 13:04:57.334845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.806 [2024-11-18 13:04:57.334849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.806 [2024-11-18 13:04:57.334852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa8b690) 00:21:59.806 [2024-11-18 13:04:57.334859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.806 [2024-11-18 13:04:57.334871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed580, cid 3, qid 0 00:21:59.806 [2024-11-18 13:04:57.334939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.806 [2024-11-18 13:04:57.334945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.807 [2024-11-18 13:04:57.334948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.807 [2024-11-18 13:04:57.334951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed580) on tqpair=0xa8b690 00:21:59.807 [2024-11-18 13:04:57.334958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.807 [2024-11-18 13:04:57.334961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.807 [2024-11-18 13:04:57.334964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa8b690) 00:21:59.807 [2024-11-18 13:04:57.334970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.807 [2024-11-18 13:04:57.334982] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed580, cid 3, qid 0 00:21:59.807 [2024-11-18 13:04:57.335055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.807 [2024-11-18 13:04:57.335060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.807 [2024-11-18 13:04:57.335063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.807 [2024-11-18 13:04:57.335067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed580) on tqpair=0xa8b690 00:21:59.807 [2024-11-18 13:04:57.335071] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:59.807 [2024-11-18 13:04:57.335075] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:59.807 [2024-11-18 13:04:57.335083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.807 [2024-11-18 13:04:57.335086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.807 [2024-11-18 13:04:57.335089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa8b690) 00:21:59.807 [2024-11-18 13:04:57.335095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.807 [2024-11-18 13:04:57.335105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed580, cid 3, qid 0 00:21:59.807 [2024-11-18 13:04:57.335174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.807 [2024-11-18 13:04:57.335179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.807 [2024-11-18 13:04:57.335182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.807 [2024-11-18 13:04:57.335186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed580) on tqpair=0xa8b690 00:21:59.807 [2024-11-18 13:04:57.335194] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.807 [2024-11-18 13:04:57.335198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.807 [2024-11-18 13:04:57.335201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa8b690) 00:21:59.807 [2024-11-18 13:04:57.335207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.807 [2024-11-18 13:04:57.335216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed580, cid 3, qid 0 00:21:59.808 [2024-11-18 13:04:57.335290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.808 [2024-11-18 13:04:57.335296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.808 [2024-11-18 13:04:57.335299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.808 [2024-11-18 13:04:57.335302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed580) on tqpair=0xa8b690 00:21:59.808 [2024-11-18 13:04:57.335310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.808 [2024-11-18 13:04:57.335314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.808 [2024-11-18 13:04:57.335318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa8b690) 00:21:59.808 [2024-11-18 13:04:57.335324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.808 [2024-11-18 13:04:57.335334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed580, cid 3, qid 0 00:21:59.808 [2024-11-18 13:04:57.339360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.808 [2024-11-18 13:04:57.339367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.808 [2024-11-18 13:04:57.339370] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.808 [2024-11-18 13:04:57.339374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed580) on tqpair=0xa8b690 00:21:59.808 [2024-11-18 13:04:57.339383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.808 [2024-11-18 13:04:57.339387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.808 [2024-11-18 13:04:57.339390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa8b690) 00:21:59.808 [2024-11-18 13:04:57.339396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.808 [2024-11-18 13:04:57.339407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaed580, cid 3, qid 0 00:21:59.808 [2024-11-18 13:04:57.339541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.809 [2024-11-18 13:04:57.339547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.809 [2024-11-18 13:04:57.339550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.809 [2024-11-18 13:04:57.339553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaed580) on tqpair=0xa8b690 00:21:59.809 [2024-11-18 13:04:57.339561] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:21:59.809 0% 00:21:59.809 Data Units Read: 0 00:21:59.809 Data Units Written: 0 00:21:59.809 Host Read Commands: 0 00:21:59.809 Host Write Commands: 0 00:21:59.809 Controller Busy Time: 0 minutes 00:21:59.809 Power Cycles: 0 00:21:59.809 Power On Hours: 0 hours 00:21:59.809 Unsafe Shutdowns: 0 00:21:59.809 Unrecoverable Media Errors: 0 00:21:59.809 Lifetime Error Log Entries: 0 00:21:59.809 Warning Temperature Time: 0 minutes 00:21:59.809 Critical Temperature Time: 0 minutes 00:21:59.809 00:21:59.809 Number of 
Queues 00:21:59.809 ================ 00:21:59.809 Number of I/O Submission Queues: 127 00:21:59.809 Number of I/O Completion Queues: 127 00:21:59.809 00:21:59.809 Active Namespaces 00:21:59.809 ================= 00:21:59.809 Namespace ID:1 00:21:59.809 Error Recovery Timeout: Unlimited 00:21:59.809 Command Set Identifier: NVM (00h) 00:21:59.809 Deallocate: Supported 00:21:59.810 Deallocated/Unwritten Error: Not Supported 00:21:59.810 Deallocated Read Value: Unknown 00:21:59.810 Deallocate in Write Zeroes: Not Supported 00:21:59.810 Deallocated Guard Field: 0xFFFF 00:21:59.810 Flush: Supported 00:21:59.810 Reservation: Supported 00:21:59.810 Namespace Sharing Capabilities: Multiple Controllers 00:21:59.810 Size (in LBAs): 131072 (0GiB) 00:21:59.810 Capacity (in LBAs): 131072 (0GiB) 00:21:59.810 Utilization (in LBAs): 131072 (0GiB) 00:21:59.810 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:59.810 EUI64: ABCDEF0123456789 00:21:59.810 UUID: 15b6f0a2-c0bc-4a27-b71f-d53bfdd905ab 00:21:59.810 Thin Provisioning: Not Supported 00:21:59.810 Per-NS Atomic Units: Yes 00:21:59.810 Atomic Boundary Size (Normal): 0 00:21:59.810 Atomic Boundary Size (PFail): 0 00:21:59.810 Atomic Boundary Offset: 0 00:21:59.810 Maximum Single Source Range Length: 65535 00:21:59.810 Maximum Copy Length: 65535 00:21:59.810 Maximum Source Range Count: 1 00:21:59.810 NGUID/EUI64 Never Reused: No 00:21:59.810 Namespace Write Protected: No 00:21:59.810 Number of LBA Formats: 1 00:21:59.810 Current LBA Format: LBA Format #00 00:21:59.810 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:59.810 00:21:59.810 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:59.810 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.810 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.811 rmmod nvme_tcp 00:21:59.811 rmmod nvme_fabrics 00:21:59.811 rmmod nvme_keyring 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2404394 ']' 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2404394 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 2404394 ']' 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 2404394 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:59.811 13:04:57 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2404394 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:59.811 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2404394' 00:21:59.812 killing process with pid 2404394 00:21:59.812 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 2404394 00:21:59.812 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 2404394 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.074 13:04:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:02.611 13:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.611 00:22:02.611 real 0m9.847s 00:22:02.611 user 0m8.380s 00:22:02.611 sys 0m4.740s 00:22:02.611 13:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:02.611 13:04:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.612 ************************************ 00:22:02.612 END TEST nvmf_identify 00:22:02.612 ************************************ 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.612 ************************************ 00:22:02.612 START TEST nvmf_perf 00:22:02.612 ************************************ 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:02.612 * Looking for test storage... 
00:22:02.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:02.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.612 --rc genhtml_branch_coverage=1 00:22:02.612 --rc genhtml_function_coverage=1 00:22:02.612 --rc genhtml_legend=1 00:22:02.612 --rc geninfo_all_blocks=1 00:22:02.612 --rc geninfo_unexecuted_blocks=1 00:22:02.612 00:22:02.612 ' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:02.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:02.612 --rc genhtml_branch_coverage=1 00:22:02.612 --rc genhtml_function_coverage=1 00:22:02.612 --rc genhtml_legend=1 00:22:02.612 --rc geninfo_all_blocks=1 00:22:02.612 --rc geninfo_unexecuted_blocks=1 00:22:02.612 00:22:02.612 ' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:02.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.612 --rc genhtml_branch_coverage=1 00:22:02.612 --rc genhtml_function_coverage=1 00:22:02.612 --rc genhtml_legend=1 00:22:02.612 --rc geninfo_all_blocks=1 00:22:02.612 --rc geninfo_unexecuted_blocks=1 00:22:02.612 00:22:02.612 ' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:02.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.612 --rc genhtml_branch_coverage=1 00:22:02.612 --rc genhtml_function_coverage=1 00:22:02.612 --rc genhtml_legend=1 00:22:02.612 --rc geninfo_all_blocks=1 00:22:02.612 --rc geninfo_unexecuted_blocks=1 00:22:02.612 00:22:02.612 ' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.612 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.613 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.613 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.613 13:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:02.613 13:05:00 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.613 13:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.187 13:05:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.187 
13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:09.187 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:09.187 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:09.187 Found net devices under 0000:86:00.0: cvl_0_0 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.187 13:05:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:09.187 Found net devices under 0000:86:00.1: cvl_0_1 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.187 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:22:09.188 00:22:09.188 --- 10.0.0.2 ping statistics --- 00:22:09.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.188 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:22:09.188 00:22:09.188 --- 10.0.0.1 ping statistics --- 00:22:09.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.188 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
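The two ping runs above confirm bidirectional connectivity between the host-side interface and the target interface that was moved into the `cvl_0_0_ns_spdk` namespace. As an aside for anyone post-processing these logs, ping's `rtt min/avg/max/mdev` summary line has a fixed shape and can be parsed mechanically. This is purely an illustrative helper (not part of the test suite); the sample line is copied from the log above:

```python
import re

def parse_rtt(line: str) -> dict:
    # Matches ping's summary line: "rtt min/avg/max/mdev = a/b/c/d ms"
    m = re.search(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", line)
    if not m:
        raise ValueError("not an rtt summary line")
    keys = ("min", "avg", "max", "mdev")
    return dict(zip(keys, map(float, m.groups())))

sample = "rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms"
print(parse_rtt(sample))  # {'min': 0.379, 'avg': 0.379, 'max': 0.379, 'mdev': 0.0}
```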
start_nvmf_tgt 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2408171 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2408171 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 2408171 ']' 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:09.188 13:05:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:09.188 [2024-11-18 13:05:06.051730] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:22:09.188 [2024-11-18 13:05:06.051774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.188 [2024-11-18 13:05:06.131945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.188 [2024-11-18 13:05:06.174405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.188 [2024-11-18 13:05:06.174459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.188 [2024-11-18 13:05:06.174467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.188 [2024-11-18 13:05:06.174473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.188 [2024-11-18 13:05:06.174479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.188 [2024-11-18 13:05:06.176101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:09.188 [2024-11-18 13:05:06.176209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:09.188 [2024-11-18 13:05:06.176320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:09.188 [2024-11-18 13:05:06.176320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:22:09.188 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:22:09.188 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0
00:22:09.188 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:09.188 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:09.188 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:09.188 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:09.188 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:22:09.188 13:05:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:22:11.728 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:22:11.728 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:22:11.986 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0
00:22:11.986 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:22:12.244 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:22:12.244 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:22:12.244 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:22:12.244 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:22:12.244 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:12.502 [2024-11-18 13:05:09.955278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:12.502 13:05:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:12.502 13:05:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:22:12.502 13:05:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:12.760 13:05:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:22:12.760 13:05:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:22:13.019 13:05:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:13.277 [2024-11-18 13:05:10.786447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:13.277 13:05:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:13.535 13:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:22:13.535 13:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:13.535 13:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:22:13.535 13:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:14.911 Initializing NVMe Controllers
00:22:14.911 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:22:14.911 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:22:14.911 Initialization complete. Launching workers.
00:22:14.911 ========================================================
00:22:14.911 Latency(us)
00:22:14.911 Device Information : IOPS MiB/s Average min max
00:22:14.911 PCIE (0000:5e:00.0) NSID 1 from core 0: 96994.22 378.88 329.28 24.68 4458.15
00:22:14.911 ========================================================
00:22:14.911 Total : 96994.22 378.88 329.28 24.68 4458.15
00:22:14.911
00:22:14.911 13:05:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:15.849 Initializing NVMe Controllers
00:22:15.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:15.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:15.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:15.849 Initialization complete. Launching workers.
00:22:15.849 ========================================================
00:22:15.849 Latency(us)
00:22:15.849 Device Information : IOPS MiB/s Average min max
00:22:15.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 139.00 0.54 7301.33 108.84 45682.67
00:22:15.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24481.06 7958.01 47885.39
00:22:15.849 ========================================================
00:22:15.849 Total : 180.00 0.70 11214.49 108.84 47885.39
00:22:15.849
00:22:16.108 13:05:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:17.487 Initializing NVMe Controllers
00:22:17.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:17.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:17.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:17.487 Initialization complete. Launching workers.
00:22:17.487 ========================================================
00:22:17.487 Latency(us)
00:22:17.487 Device Information : IOPS MiB/s Average min max
00:22:17.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10882.16 42.51 2939.68 392.73 9824.41
00:22:17.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3832.01 14.97 8392.99 7101.08 47718.04
00:22:17.487 ========================================================
00:22:17.487 Total : 14714.18 57.48 4359.89 392.73 47718.04
00:22:17.487
00:22:17.487 13:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:22:17.487 13:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:22:17.487 13:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:20.023 Initializing NVMe Controllers
00:22:20.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:20.023 Controller IO queue size 128, less than required.
00:22:20.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:20.023 Controller IO queue size 128, less than required.
00:22:20.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:20.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:20.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:20.023 Initialization complete. Launching workers.
00:22:20.023 ========================================================
00:22:20.023 Latency(us)
00:22:20.023 Device Information : IOPS MiB/s Average min max
00:22:20.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1734.89 433.72 74723.66 49939.78 128992.27
00:22:20.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.96 150.99 219201.72 72997.40 326015.70
00:22:20.023 ========================================================
00:22:20.023 Total : 2338.86 584.71 112032.23 49939.78 326015.70
00:22:20.023
00:22:20.023 13:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:20.023 No valid NVMe controllers or AIO or URING devices found
00:22:20.023 Initializing NVMe Controllers
00:22:20.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:20.023 Controller IO queue size 128, less than required.
00:22:20.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:20.023 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:20.023 Controller IO queue size 128, less than required.
00:22:20.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:20.023 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:20.023 WARNING: Some requested NVMe devices were skipped
00:22:20.023 13:05:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:22.559 Initializing NVMe Controllers
00:22:22.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:22.559 Controller IO queue size 128, less than required.
00:22:22.559 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:22.559 Controller IO queue size 128, less than required.
00:22:22.559 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:22.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:22.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:22.559 Initialization complete. Launching workers.
00:22:22.559
00:22:22.559 ====================
00:22:22.559 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:22:22.559 TCP transport:
00:22:22.559 polls: 10889
00:22:22.559 idle_polls: 7665
00:22:22.559 sock_completions: 3224
00:22:22.559 nvme_completions: 6089
00:22:22.559 submitted_requests: 9094
00:22:22.559 queued_requests: 1
00:22:22.559
00:22:22.559 ====================
00:22:22.559 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:22:22.559 TCP transport:
00:22:22.559 polls: 14858
00:22:22.559 idle_polls: 10960
00:22:22.559 sock_completions: 3898
00:22:22.559 nvme_completions: 6753
00:22:22.559 submitted_requests: 10152
00:22:22.559 queued_requests: 1
00:22:22.559 ========================================================
00:22:22.559 Latency(us)
00:22:22.559 Device Information : IOPS MiB/s Average min max
00:22:22.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1519.65 379.91 86036.37 56808.52 137022.23
00:22:22.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1685.39 421.35 77100.33 46816.06 130462.24
00:22:22.559 ========================================================
00:22:22.559 Total : 3205.04 801.26 81337.30 46816.06 137022.23
00:22:22.559
00:22:22.559 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:22:22.559 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf
-- nvmf/common.sh@121 -- # sync
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:22.818 rmmod nvme_tcp
00:22:22.818 rmmod nvme_fabrics
00:22:22.818 rmmod nvme_keyring
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2408171 ']'
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2408171
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 2408171 ']'
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 2408171
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:22.818 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2408171
00:22:23.077 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:23.077 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:23.077 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2408171'
00:22:23.077 killing process with pid 2408171
00:22:23.077 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 2408171
00:22:23.077 13:05:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 2408171
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:24.456 13:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:26.996
00:22:26.996 real 0m24.286s
00:22:26.996 user 1m3.022s
00:22:26.996 sys 0m8.324s
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:26.996 ************************************
00:22:26.996 END TEST nvmf_perf
00:22:26.996 ************************************
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:26.996 ************************************
00:22:26.996 START TEST nvmf_fio_host
00:22:26.996 ************************************
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:22:26.996 * Looking for test storage...
00:22:26.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<'
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:22:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:26.996 --rc genhtml_branch_coverage=1
00:22:26.996 --rc genhtml_function_coverage=1
00:22:26.996 --rc genhtml_legend=1
00:22:26.996 --rc geninfo_all_blocks=1
00:22:26.996 --rc geninfo_unexecuted_blocks=1
00:22:26.996
00:22:26.996 '
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:22:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:26.996 --rc genhtml_branch_coverage=1
00:22:26.996 --rc genhtml_function_coverage=1
00:22:26.996 --rc genhtml_legend=1
00:22:26.996 --rc geninfo_all_blocks=1
00:22:26.996 --rc geninfo_unexecuted_blocks=1
00:22:26.996
00:22:26.996 '
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:22:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:26.996 --rc genhtml_branch_coverage=1
00:22:26.996 --rc genhtml_function_coverage=1
00:22:26.996 --rc genhtml_legend=1
00:22:26.996 --rc geninfo_all_blocks=1
00:22:26.996 --rc geninfo_unexecuted_blocks=1
00:22:26.996
00:22:26.996 '
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:22:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:26.996 --rc genhtml_branch_coverage=1
00:22:26.996 --rc genhtml_function_coverage=1
00:22:26.996 --rc genhtml_legend=1
00:22:26.996 --rc geninfo_all_blocks=1
00:22:26.996 --rc geninfo_unexecuted_blocks=1
00:22:26.996
00:22:26.996 '
00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.996 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.997 13:05:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable
00:22:26.997 13:05:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=()
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=()
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=()
00:22:33.570 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=()
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=()
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:22:33.571 Found 0000:86:00.0 (0x8086 - 0x159b)
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:22:33.571 Found 0000:86:00.1 (0x8086 - 0x159b)
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:33.571 13:05:30
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:33.571 Found net devices under 0000:86:00.0: cvl_0_0 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:33.571 Found net devices under 0000:86:00.1: cvl_0_1 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
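The discovery loop traced above (`gather_supported_nvmf_pci_devs`) maps each supported PCI address to its kernel net interfaces by globbing the device's `net/` directory in sysfs. A minimal self-contained sketch of that step, with the sysfs tree mocked under a temp directory (the path layout `/sys/bus/pci/devices/<bdf>/net/<ifname>` is the standard kernel ABI; the interface name here mirrors the `cvl_0_0` seen in the log):

```shell
# Sketch of the PCI -> net-device discovery step from nvmf/common.sh.
# The sysfs tree is mocked under a temp dir so the sketch is runnable anywhere.
sysfs=$(mktemp -d)
pci="0000:86:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)       # glob the device's net directory
pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the path, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

The real script additionally checks the link's operstate (`[[ up == up ]]` in the trace) before appending to `net_devs`.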
00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.571 13:05:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:22:33.571 00:22:33.571 --- 10.0.0.2 ping statistics --- 00:22:33.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.571 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:22:33.571 00:22:33.571 --- 10.0.0.1 ping statistics --- 00:22:33.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.571 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.571 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2414281 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2414281 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 2414281 ']' 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.572 [2024-11-18 13:05:30.397037] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:22:33.572 [2024-11-18 13:05:30.397093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.572 [2024-11-18 13:05:30.477485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.572 [2024-11-18 13:05:30.519718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.572 [2024-11-18 13:05:30.519756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:33.572 [2024-11-18 13:05:30.519764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.572 [2024-11-18 13:05:30.519771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.572 [2024-11-18 13:05:30.519777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.572 [2024-11-18 13:05:30.521428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.572 [2024-11-18 13:05:30.521460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.572 [2024-11-18 13:05:30.521492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.572 [2024-11-18 13:05:30.521493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:33.572 [2024-11-18 13:05:30.807780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.572 13:05:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:33.572 Malloc1 00:22:33.572 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:33.831 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:33.831 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.090 [2024-11-18 13:05:31.669687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.090 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:34.348 13:05:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:34.348 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:34.349 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:34.349 13:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:34.607 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:34.607 fio-3.35 00:22:34.607 Starting 1 thread 00:22:37.144 00:22:37.144 test: (groupid=0, jobs=1): err= 0: pid=2414867: Mon Nov 18 13:05:34 2024 00:22:37.144 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(91.1MiB/2005msec) 00:22:37.144 slat (nsec): min=1568, max=241942, avg=1700.63, stdev=2217.23 00:22:37.144 clat (usec): min=2738, max=10413, avg=6065.25, stdev=478.07 00:22:37.144 lat (usec): min=2766, max=10415, avg=6066.95, stdev=477.91 00:22:37.144 clat percentiles (usec): 00:22:37.144 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:22:37.144 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:22:37.144 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:22:37.144 | 99.00th=[ 7111], 99.50th=[ 7242], 99.90th=[ 8848], 99.95th=[ 9765], 00:22:37.144 | 99.99th=[10421] 00:22:37.144 bw ( KiB/s): min=45424, max=47256, per=99.94%, avg=46484.00, stdev=776.69, samples=4 00:22:37.144 iops : min=11356, max=11814, avg=11621.00, stdev=194.17, samples=4 00:22:37.144 write: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(90.4MiB/2005msec); 0 zone resets 00:22:37.144 slat (nsec): min=1600, max=156090, avg=1762.50, stdev=1225.62 00:22:37.144 clat (usec): min=2200, max=9743, avg=4917.24, stdev=381.70 00:22:37.144 lat (usec): min=2215, max=9745, avg=4919.00, stdev=381.58 00:22:37.144 clat percentiles (usec): 00:22:37.144 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:37.144 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 
00:22:37.144 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:22:37.144 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 7242], 99.95th=[ 8455], 00:22:37.144 | 99.99th=[ 9110] 00:22:37.144 bw ( KiB/s): min=45728, max=46584, per=100.00%, avg=46180.00, stdev=382.69, samples=4 00:22:37.144 iops : min=11432, max=11646, avg=11545.00, stdev=95.67, samples=4 00:22:37.144 lat (msec) : 4=0.44%, 10=99.54%, 20=0.02% 00:22:37.144 cpu : usr=73.80%, sys=25.30%, ctx=110, majf=0, minf=3 00:22:37.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:37.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:37.144 issued rwts: total=23314,23145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:37.144 00:22:37.144 Run status group 0 (all jobs): 00:22:37.144 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=91.1MiB (95.5MB), run=2005-2005msec 00:22:37.144 WRITE: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.4MiB (94.8MB), run=2005-2005msec 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' 
]] 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:37.144 13:05:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:37.404 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:37.404 fio-3.35 00:22:37.404 Starting 1 thread 00:22:39.938 00:22:39.938 test: (groupid=0, jobs=1): err= 0: pid=2415441: Mon Nov 18 13:05:37 2024 00:22:39.938 read: IOPS=10.9k, BW=171MiB/s (179MB/s)(343MiB/2006msec) 00:22:39.938 slat (nsec): min=2609, max=88375, avg=2846.16, stdev=1213.81 00:22:39.938 clat (usec): min=1647, max=12950, avg=6757.29, stdev=1541.77 00:22:39.938 lat (usec): min=1650, max=12953, avg=6760.13, stdev=1541.85 00:22:39.938 clat percentiles (usec): 00:22:39.938 | 1.00th=[ 3654], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5407], 00:22:39.938 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 7177], 00:22:39.938 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[ 9372], 00:22:39.938 | 99.00th=[10683], 99.50th=[10945], 99.90th=[11600], 99.95th=[11994], 00:22:39.938 | 99.99th=[12256] 00:22:39.938 bw ( KiB/s): min=82944, max=94112, per=50.20%, avg=87824.00, stdev=5614.79, samples=4 00:22:39.938 iops : min= 5184, max= 5882, avg=5489.00, stdev=350.92, samples=4 00:22:39.938 write: IOPS=6417, BW=100MiB/s (105MB/s)(180MiB/1793msec); 0 zone resets 00:22:39.938 slat (usec): min=29, max=348, avg=31.72, stdev= 6.30 00:22:39.938 clat (usec): min=4708, max=14031, avg=8670.60, stdev=1465.73 00:22:39.938 lat (usec): min=4738, max=14063, avg=8702.32, stdev=1466.49 00:22:39.938 clat percentiles (usec): 00:22:39.938 | 1.00th=[ 5866], 5.00th=[ 6587], 10.00th=[ 6915], 
20.00th=[ 7439], 00:22:39.938 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:22:39.938 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11469], 00:22:39.938 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13435], 99.95th=[13829], 00:22:39.938 | 99.99th=[13960] 00:22:39.938 bw ( KiB/s): min=86272, max=98304, per=89.23%, avg=91616.00, stdev=5605.45, samples=4 00:22:39.938 iops : min= 5392, max= 6144, avg=5726.00, stdev=350.34, samples=4 00:22:39.938 lat (msec) : 2=0.03%, 4=1.50%, 10=90.41%, 20=8.07% 00:22:39.938 cpu : usr=86.03%, sys=13.27%, ctx=40, majf=0, minf=3 00:22:39.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:39.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:39.938 issued rwts: total=21932,11506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:39.938 00:22:39.938 Run status group 0 (all jobs): 00:22:39.938 READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=343MiB (359MB), run=2006-2006msec 00:22:39.938 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=180MiB (189MB), run=1793-1793msec 00:22:39.938 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.938 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:39.938 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:39.938 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:39.938 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:39.938 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
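The `killprocess` sequence traced below checks that the pid is alive (`kill -0`), reads its command name with `ps --no-headers -o comm=`, then terminates it. A self-contained sketch of that pattern, using a throwaway `sleep` process as a stand-in for `nvmf_tgt`:

```shell
# Sketch of the killprocess pattern: liveness check, name lookup, kill, reap.
sleep 30 &
pid=$!
kill -0 "$pid" && name=$(ps --no-headers -o comm= "$pid")  # alive? get name
kill "$pid"                                                # terminate it
wait "$pid" 2>/dev/null || true                            # reap; ignore 143
echo "killed $name ($pid)"
```

The real helper also refuses to kill processes running as `sudo` (the `'[' reactor_0 = sudo ']'` check in the trace).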
00:22:39.938 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.197 rmmod nvme_tcp 00:22:40.197 rmmod nvme_fabrics 00:22:40.197 rmmod nvme_keyring 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2414281 ']' 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2414281 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 2414281 ']' 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 2414281 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2414281 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2414281' 
00:22:40.197 killing process with pid 2414281 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 2414281 00:22:40.197 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 2414281 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.457 13:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.363 13:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.363 00:22:42.363 real 0m15.872s 00:22:42.363 user 0m46.834s 00:22:42.363 sys 0m6.513s 00:22:42.363 13:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:42.363 13:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.363 ************************************ 
00:22:42.363 END TEST nvmf_fio_host 00:22:42.363 ************************************ 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.624 ************************************ 00:22:42.624 START TEST nvmf_failover 00:22:42.624 ************************************ 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:42.624 * Looking for test storage... 00:22:42.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.624 13:05:40 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:42.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.624 --rc genhtml_branch_coverage=1 00:22:42.624 --rc genhtml_function_coverage=1 00:22:42.624 --rc genhtml_legend=1 00:22:42.624 --rc geninfo_all_blocks=1 00:22:42.624 --rc geninfo_unexecuted_blocks=1 00:22:42.624 00:22:42.624 ' 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:42.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.624 --rc genhtml_branch_coverage=1 00:22:42.624 --rc genhtml_function_coverage=1 00:22:42.624 --rc genhtml_legend=1 00:22:42.624 --rc geninfo_all_blocks=1 00:22:42.624 --rc geninfo_unexecuted_blocks=1 00:22:42.624 00:22:42.624 ' 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:42.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.624 --rc genhtml_branch_coverage=1 00:22:42.624 --rc genhtml_function_coverage=1 00:22:42.624 --rc genhtml_legend=1 00:22:42.624 --rc geninfo_all_blocks=1 00:22:42.624 --rc geninfo_unexecuted_blocks=1 00:22:42.624 00:22:42.624 ' 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:42.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.624 --rc genhtml_branch_coverage=1 00:22:42.624 --rc genhtml_function_coverage=1 00:22:42.624 --rc genhtml_legend=1 00:22:42.624 --rc 
geninfo_all_blocks=1 00:22:42.624 --rc geninfo_unexecuted_blocks=1 00:22:42.624 00:22:42.624 ' 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.624 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.625 13:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.201 13:05:45 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:49.201 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:49.201 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:49.201 Found net devices under 0000:86:00.0: cvl_0_0 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:49.201 Found net devices under 0000:86:00.1: cvl_0_1 00:22:49.201 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.202 13:05:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:22:49.202 00:22:49.202 --- 10.0.0.2 ping statistics --- 00:22:49.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.202 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:22:49.202 00:22:49.202 --- 10.0.0.1 ping statistics --- 00:22:49.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.202 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2419228 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2419228 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2419228 ']' 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.202 [2024-11-18 13:05:46.331450] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:22:49.202 [2024-11-18 13:05:46.331499] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.202 [2024-11-18 13:05:46.413998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:49.202 [2024-11-18 13:05:46.456173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.202 [2024-11-18 13:05:46.456211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.202 [2024-11-18 13:05:46.456217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.202 [2024-11-18 13:05:46.456223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:49.202 [2024-11-18 13:05:46.456229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.202 [2024-11-18 13:05:46.457601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.202 [2024-11-18 13:05:46.457712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.202 [2024-11-18 13:05:46.457713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:49.202 [2024-11-18 13:05:46.767109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.202 13:05:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:49.461 Malloc0 00:22:49.461 13:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.721 13:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:49.721 13:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.979 [2024-11-18 13:05:47.574003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.979 13:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:50.238 [2024-11-18 13:05:47.774546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:50.238 13:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:50.496 [2024-11-18 13:05:47.987264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:50.496 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2419667 00:22:50.497 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:50.497 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.497 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2419667 /var/tmp/bdevperf.sock 00:22:50.497 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 
-- # '[' -z 2419667 ']' 00:22:50.497 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.497 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:50.497 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.497 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:50.497 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.755 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:50.755 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:50.755 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:51.015 NVMe0n1 00:22:51.015 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:51.274 00:22:51.274 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2419687 00:22:51.274 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:51.274 13:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:22:52.654 13:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.654 [2024-11-18 13:05:50.098300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with 
the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 
00:22:52.654 [2024-11-18 13:05:50.098584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.654 [2024-11-18 13:05:50.098634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 
13:05:50.098658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098732] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 [2024-11-18 13:05:50.098756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab53d0 is same with the state(6) to be set 00:22:52.655 13:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:55.946 13:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:55.946 00:22:55.946 13:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:56.205 [2024-11-18 13:05:53.786834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab6220 is same with the state(6) to be set 00:22:56.205 [2024-11-18 13:05:53.786877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab6220 is same with the state(6) to be set 00:22:56.205 [2024-11-18 13:05:53.786885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab6220 is same with the state(6) to be set 00:22:56.205 [2024-11-18 13:05:53.786891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab6220 is same with the state(6) to be set 00:22:56.205 [2024-11-18 13:05:53.786897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1ab6220 is same with the state(6) to be set 00:22:56.205 [2024-11-18 13:05:53.786903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab6220 is same with the state(6) to be set 00:22:56.205 13:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:59.495 13:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.495 [2024-11-18 13:05:57.009064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.495 13:05:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:00.428 13:05:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:00.687 [2024-11-18 13:05:58.229192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 
00:23:00.687 [2024-11-18 13:05:58.229271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 [2024-11-18 13:05:58.229319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7140 is same with the state(6) to be set 00:23:00.687 13:05:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2419687 00:23:07.266 { 00:23:07.266 "results": [ 00:23:07.266 { 00:23:07.266 "job": "NVMe0n1", 00:23:07.266 "core_mask": "0x1", 00:23:07.266 "workload": "verify", 00:23:07.266 "status": "finished", 00:23:07.266 "verify_range": { 00:23:07.266 "start": 0, 00:23:07.266 "length": 16384 00:23:07.266 }, 00:23:07.266 "queue_depth": 128, 00:23:07.266 "io_size": 4096, 00:23:07.266 "runtime": 15.003982, 00:23:07.266 "iops": 11012.876448398833, 00:23:07.266 "mibps": 43.01904862655794, 00:23:07.266 "io_failed": 3565, 00:23:07.267 "io_timeout": 0, 00:23:07.267 "avg_latency_us": 11355.055784276201, 
00:23:07.267 "min_latency_us": 487.9582608695652, 00:23:07.267 "max_latency_us": 20971.52 00:23:07.267 } 00:23:07.267 ], 00:23:07.267 "core_count": 1 00:23:07.267 } 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2419667 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2419667 ']' 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2419667 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2419667 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2419667' 00:23:07.267 killing process with pid 2419667 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2419667 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2419667 00:23:07.267 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:07.267 [2024-11-18 13:05:48.062537] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:23:07.267 [2024-11-18 13:05:48.062590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419667 ] 00:23:07.267 [2024-11-18 13:05:48.129649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.267 [2024-11-18 13:05:48.171132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.267 Running I/O for 15 seconds... 00:23:07.267 11094.00 IOPS, 43.34 MiB/s [2024-11-18T12:06:04.969Z] [2024-11-18 13:05:50.100196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100388] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:113 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.267 [2024-11-18 13:05:50.100554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.267 [2024-11-18 13:05:50.100560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:07.268 [2024-11-18 13:05:50.100569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.268 [2024-11-18 13:05:50.100575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.268 [2024-11-18 13:05:50.100583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.268 [2024-11-18 13:05:50.100590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.268 [2024-11-18 13:05:50.100598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.268 [2024-11-18 13:05:50.100606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.268 [2024-11-18 13:05:50.100614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.268 [2024-11-18 13:05:50.100621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.268 [2024-11-18 13:05:50.100629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.268 [2024-11-18 13:05:50.100636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.268 [2024-11-18 13:05:50.100644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.268 [2024-11-18 13:05:50.100651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.268
[2024-11-18 13:05:50.100659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.268
[2024-11-18 13:05:50.100665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.268
[... 84 further in-flight READ/WRITE commands (lba 96952 through 97616, len:8), each aborted with the same ABORTED - SQ DELETION (00/08) completion, omitted ...]
[2024-11-18 13:05:50.101951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.270
[2024-11-18 13:05:50.101959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97624 len:8 PRP1 0x0 PRP2 0x0 00:23:07.270
[2024-11-18 13:05:50.101966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.270
[... 13 further queued WRITE commands (lba 97632 through 97728, len:8), each manually completed and aborted with the same SQ DELETION status, omitted ...]
[2024-11-18 13:05:50.113906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.271
[2024-11-18 13:05:50.113918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.271
[2024-11-18 13:05:50.113928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97736 len:8 PRP1 0x0 PRP2 0x0 00:23:07.271
[2024-11-18 13:05:50.113937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.271
[2024-11-18 13:05:50.113946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.271
[2024-11-18 13:05:50.113954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.271
[2024-11-18 13:05:50.113964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97744 len:8 PRP1 0x0 PRP2 0x0 00:23:07.271
[2024-11-18 13:05:50.113976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.271
[2024-11-18 13:05:50.114029] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:07.271
[2024-11-18 13:05:50.114058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.271
[2024-11-18 13:05:50.114071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.271
[... 3 further ASYNC EVENT REQUEST commands (cid 1 through 3), each aborted with the same SQ DELETION status, omitted ...]
[2024-11-18 13:05:50.114141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:07.271
[2024-11-18 13:05:50.114184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b3340 (9): Bad file descriptor 00:23:07.271
[2024-11-18 13:05:50.118061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:07.271
[2024-11-18 13:05:50.145559] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:07.271 10848.00 IOPS, 42.38 MiB/s [2024-11-18T12:06:04.973Z]
00:23:07.271 10972.67 IOPS, 42.86 MiB/s [2024-11-18T12:06:04.973Z]
00:23:07.271 11005.50 IOPS, 42.99 MiB/s [2024-11-18T12:06:04.973Z]
00:23:07.271 [2024-11-18 13:05:53.787071 - 13:05:53.787533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:36920-37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 (step 8, varying cids); each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:07.272 [2024-11-18 13:05:53.787541 - 13:05:53.788600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:36168-36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (step 8, varying cids); each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-18 13:05:53.788608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.274 [2024-11-18 13:05:53.788615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.274 [2024-11-18 13:05:53.788623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.274 [2024-11-18 13:05:53.788629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.274 [2024-11-18 13:05:53.788637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.274 [2024-11-18 13:05:53.788645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 
[2024-11-18 13:05:53.788864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.275 [2024-11-18 13:05:53.788975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-11-18 13:05:53.788990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.788998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-11-18 13:05:53.789005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.789013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-11-18 13:05:53.789020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.789027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-11-18 13:05:53.789034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.789043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-11-18 13:05:53.789050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.789059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-11-18 13:05:53.789066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.789085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.275 [2024-11-18 13:05:53.789092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.275 [2024-11-18 13:05:53.789100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37184 len:8 PRP1 0x0 PRP2 0x0 00:23:07.275 [2024-11-18 13:05:53.789106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.789149] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:07.275 [2024-11-18 13:05:53.789171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.275 [2024-11-18 13:05:53.789179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.275 [2024-11-18 13:05:53.789186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.276 [2024-11-18 13:05:53.789193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:53.789200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.276 [2024-11-18 13:05:53.789209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:53.789216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.276 [2024-11-18 13:05:53.789223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:53.789230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:07.276 [2024-11-18 13:05:53.792084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:07.276 [2024-11-18 13:05:53.792113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b3340 (9): Bad file descriptor 00:23:07.276 [2024-11-18 13:05:53.820777] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:23:07.276 10954.60 IOPS, 42.79 MiB/s [2024-11-18T12:06:04.978Z] 10975.67 IOPS, 42.87 MiB/s [2024-11-18T12:06:04.978Z] 10991.71 IOPS, 42.94 MiB/s [2024-11-18T12:06:04.978Z] 11006.88 IOPS, 43.00 MiB/s [2024-11-18T12:06:04.978Z] 11015.78 IOPS, 43.03 MiB/s [2024-11-18T12:06:04.978Z] [2024-11-18 13:05:58.231063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 
13:05:58.231280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:114 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.276 [2024-11-18 13:05:58.231398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:07.276 [2024-11-18 13:05:58.231468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.276 [2024-11-18 13:05:58.231544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-11-18 13:05:58.231550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 
13:05:58.231726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231806] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.231988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.231996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.232003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.277 [2024-11-18 13:05:58.232010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.277 [2024-11-18 13:05:58.232017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.278 [2024-11-18 13:05:58.232032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.278 [2024-11-18 13:05:58.232052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.278 [2024-11-18 13:05:58.232067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.278 [2024-11-18 13:05:58.232081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.278 [2024-11-18 13:05:58.232096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.278 [2024-11-18 13:05:58.232112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.278 [2024-11-18 13:05:58.232127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44480 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232176] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44488 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44496 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44504 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232262] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44512 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44520 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44528 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44536 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44544 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44552 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44560 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232442] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44568 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44576 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44584 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 [2024-11-18 13:05:58.232505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44592 len:8 PRP1 0x0 PRP2 0x0 00:23:07.278 
[2024-11-18 13:05:58.232529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.278 [2024-11-18 13:05:58.232536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.278 [2024-11-18 13:05:58.232541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.278 [2024-11-18 13:05:58.232548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44600 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44608 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44616 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44624 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44632 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44640 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44648 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44656 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44664 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44672 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44680 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44688 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44696 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44704 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44712 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44720 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44728 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44736 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.232977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.232984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.232989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.232995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44744 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.233002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.233009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.233015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.233020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44752 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.233028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.233035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 
[2024-11-18 13:05:58.233040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.233045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44760 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.233051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.233058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.233064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.233069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44768 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.233075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.233082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.279 [2024-11-18 13:05:58.233087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.279 [2024-11-18 13:05:58.233093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44776 len:8 PRP1 0x0 PRP2 0x0 00:23:07.279 [2024-11-18 13:05:58.233099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.279 [2024-11-18 13:05:58.233108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.233113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.233119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:44784 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.233125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.233132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.233137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.233143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44792 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.233150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.233156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.233162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.233168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44800 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.233174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.233181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.233186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.233191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44808 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.233198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.233204] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.233209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.233216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44816 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.233223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.233230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.233235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.233241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44824 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.233247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.233253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.233259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.233264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44832 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.233270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.233277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.233283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 
13:05:58.243451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44840 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44848 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44856 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44864 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44872 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44880 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44888 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243695] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44896 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44904 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44912 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44920 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 
[2024-11-18 13:05:58.243814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44928 len:8 PRP1 0x0 PRP2 0x0 00:23:07.280 [2024-11-18 13:05:58.243848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.280 [2024-11-18 13:05:58.243857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.280 [2024-11-18 13:05:58.243864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.280 [2024-11-18 13:05:58.243872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44936 len:8 PRP1 0x0 PRP2 0x0 00:23:07.281 [2024-11-18 13:05:58.243882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.281 [2024-11-18 13:05:58.243892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.281 [2024-11-18 13:05:58.243899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.281 [2024-11-18 13:05:58.243907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44944 len:8 PRP1 0x0 PRP2 0x0 00:23:07.281 [2024-11-18 13:05:58.243915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.281 [2024-11-18 13:05:58.243926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:07.281 [2024-11-18 13:05:58.243933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.281 [2024-11-18 13:05:58.243941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44952 len:8 PRP1 0x0 PRP2 0x0 00:23:07.281 [2024-11-18 13:05:58.243950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.281 [2024-11-18 13:05:58.243959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.281 [2024-11-18 13:05:58.243967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.281 [2024-11-18 13:05:58.243974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44960 len:8 PRP1 0x0 PRP2 0x0 00:23:07.281 [2024-11-18 13:05:58.243983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.281 [2024-11-18 13:05:58.243992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.281 [2024-11-18 13:05:58.244000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.281 [2024-11-18 13:05:58.244008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44968 len:8 PRP1 0x0 PRP2 0x0 00:23:07.281 [2024-11-18 13:05:58.244016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.281 [2024-11-18 13:05:58.244066] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:07.281 [2024-11-18 13:05:58.244095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:07.281 [2024-11-18 13:05:58.244106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.281 [2024-11-18 13:05:58.244116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.281 [2024-11-18 13:05:58.244125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.281 [2024-11-18 13:05:58.244136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.281 [2024-11-18 13:05:58.244146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.281 [2024-11-18 13:05:58.244156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.281 [2024-11-18 13:05:58.244165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.281 [2024-11-18 13:05:58.244174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:07.281 [2024-11-18 13:05:58.244203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b3340 (9): Bad file descriptor 00:23:07.281 [2024-11-18 13:05:58.248080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:07.281 [2024-11-18 13:05:58.276764] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:23:07.281 10971.60 IOPS, 42.86 MiB/s [2024-11-18T12:06:04.983Z] 10982.27 IOPS, 42.90 MiB/s [2024-11-18T12:06:04.983Z] 10988.67 IOPS, 42.92 MiB/s [2024-11-18T12:06:04.983Z] 11006.00 IOPS, 42.99 MiB/s [2024-11-18T12:06:04.983Z] 11009.07 IOPS, 43.00 MiB/s 00:23:07.281 Latency(us) 00:23:07.281 [2024-11-18T12:06:04.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.281 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:07.281 Verification LBA range: start 0x0 length 0x4000 00:23:07.281 NVMe0n1 : 15.00 11012.88 43.02 237.60 0.00 11355.06 487.96 20971.52 00:23:07.281 [2024-11-18T12:06:04.983Z] =================================================================================================================== 00:23:07.281 [2024-11-18T12:06:04.983Z] Total : 11012.88 43.02 237.60 0.00 11355.06 487.96 20971.52 00:23:07.281 Received shutdown signal, test time was about 15.000000 seconds 00:23:07.281 00:23:07.281 Latency(us) 00:23:07.281 [2024-11-18T12:06:04.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.281 [2024-11-18T12:06:04.983Z] =================================================================================================================== 00:23:07.281 [2024-11-18T12:06:04.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2422333 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:07.281 
13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2422333 /var/tmp/bdevperf.sock 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2422333 ']' 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:07.281 [2024-11-18 13:06:04.712607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:07.281 [2024-11-18 13:06:04.913214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:07.281 13:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:07.850 NVMe0n1 00:23:07.850 13:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:08.109 00:23:08.368 13:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:08.627 00:23:08.627 13:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.627 13:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:08.887 13:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:08.887 13:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:12.176 13:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.176 13:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:12.176 13:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2423646 00:23:12.176 13:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:12.176 13:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2423646 00:23:13.557 { 00:23:13.558 "results": [ 00:23:13.558 { 00:23:13.558 "job": "NVMe0n1", 00:23:13.558 "core_mask": "0x1", 00:23:13.558 "workload": "verify", 00:23:13.558 "status": "finished", 00:23:13.558 "verify_range": { 00:23:13.558 "start": 0, 00:23:13.558 "length": 16384 00:23:13.558 }, 00:23:13.558 "queue_depth": 128, 00:23:13.558 "io_size": 4096, 00:23:13.558 "runtime": 1.009234, 00:23:13.558 "iops": 10994.476999387654, 00:23:13.558 "mibps": 42.947175778858025, 00:23:13.558 "io_failed": 0, 00:23:13.558 "io_timeout": 0, 00:23:13.558 "avg_latency_us": 11600.6305708285, 00:23:13.558 "min_latency_us": 2464.7234782608693, 00:23:13.558 "max_latency_us": 12708.285217391305 00:23:13.558 } 00:23:13.558 ], 00:23:13.558 "core_count": 1 00:23:13.558 } 00:23:13.558 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:13.558 [2024-11-18 13:06:04.331697] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:23:13.558 [2024-11-18 13:06:04.331747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422333 ] 00:23:13.558 [2024-11-18 13:06:04.405472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.558 [2024-11-18 13:06:04.443381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.558 [2024-11-18 13:06:06.519029] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:13.558 [2024-11-18 13:06:06.519074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.558 [2024-11-18 13:06:06.519086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.558 [2024-11-18 13:06:06.519094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.558 [2024-11-18 13:06:06.519102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.558 [2024-11-18 13:06:06.519110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.558 [2024-11-18 13:06:06.519117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.558 [2024-11-18 13:06:06.519125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.558 [2024-11-18 13:06:06.519132] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.558 [2024-11-18 13:06:06.519138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:13.558 [2024-11-18 13:06:06.519163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:13.558 [2024-11-18 13:06:06.519178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b3340 (9): Bad file descriptor 00:23:13.558 [2024-11-18 13:06:06.539745] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:13.558 Running I/O for 1 seconds... 00:23:13.558 10967.00 IOPS, 42.84 MiB/s 00:23:13.558 Latency(us) 00:23:13.558 [2024-11-18T12:06:11.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.558 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:13.558 Verification LBA range: start 0x0 length 0x4000 00:23:13.558 NVMe0n1 : 1.01 10994.48 42.95 0.00 0.00 11600.63 2464.72 12708.29 00:23:13.558 [2024-11-18T12:06:11.260Z] =================================================================================================================== 00:23:13.558 [2024-11-18T12:06:11.260Z] Total : 10994.48 42.95 0.00 0.00 11600.63 2464.72 12708.29 00:23:13.558 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:13.558 13:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:13.558 13:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:13.817 13:06:11 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:13.817 13:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:13.817 13:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:14.076 13:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:17.369 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:17.369 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2422333 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2422333 ']' 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2422333 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2422333 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2422333' 00:23:17.370 killing 
process with pid 2422333 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2422333 00:23:17.370 13:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2422333 00:23:17.630 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:17.630 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.630 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:17.630 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:17.630 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:17.630 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.630 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.889 rmmod nvme_tcp 00:23:17.889 rmmod nvme_fabrics 00:23:17.889 rmmod nvme_keyring 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2419228 ']' 00:23:17.889 13:06:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2419228 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2419228 ']' 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2419228 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2419228 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2419228' 00:23:17.889 killing process with pid 2419228 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2419228 00:23:17.889 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2419228 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:18.149 13:06:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.149 13:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.052 13:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:20.052 00:23:20.052 real 0m37.603s 00:23:20.052 user 1m58.949s 00:23:20.052 sys 0m8.129s 00:23:20.052 13:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:20.052 13:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.052 ************************************ 00:23:20.052 END TEST nvmf_failover 00:23:20.052 ************************************ 00:23:20.052 13:06:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:20.052 13:06:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:20.052 13:06:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:20.052 13:06:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.313 ************************************ 00:23:20.313 START TEST nvmf_host_discovery 00:23:20.313 ************************************ 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:20.313 * Looking for test storage... 
00:23:20.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:20.313 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:20.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.314 --rc genhtml_branch_coverage=1 00:23:20.314 --rc genhtml_function_coverage=1 00:23:20.314 --rc 
genhtml_legend=1 00:23:20.314 --rc geninfo_all_blocks=1 00:23:20.314 --rc geninfo_unexecuted_blocks=1 00:23:20.314 00:23:20.314 ' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:20.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.314 --rc genhtml_branch_coverage=1 00:23:20.314 --rc genhtml_function_coverage=1 00:23:20.314 --rc genhtml_legend=1 00:23:20.314 --rc geninfo_all_blocks=1 00:23:20.314 --rc geninfo_unexecuted_blocks=1 00:23:20.314 00:23:20.314 ' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:20.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.314 --rc genhtml_branch_coverage=1 00:23:20.314 --rc genhtml_function_coverage=1 00:23:20.314 --rc genhtml_legend=1 00:23:20.314 --rc geninfo_all_blocks=1 00:23:20.314 --rc geninfo_unexecuted_blocks=1 00:23:20.314 00:23:20.314 ' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:20.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.314 --rc genhtml_branch_coverage=1 00:23:20.314 --rc genhtml_function_coverage=1 00:23:20.314 --rc genhtml_legend=1 00:23:20.314 --rc geninfo_all_blocks=1 00:23:20.314 --rc geninfo_unexecuted_blocks=1 00:23:20.314 00:23:20.314 ' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.314 13:06:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.314 13:06:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.314 13:06:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:20.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:20.314 13:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.904 
13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.904 13:06:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:26.904 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.904 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:26.905 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:26.905 Found net devices under 0000:86:00.0: cvl_0_0 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:26.905 Found net devices under 0000:86:00.1: cvl_0_1 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:23:26.905 00:23:26.905 --- 10.0.0.2 ping statistics --- 00:23:26.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.905 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:23:26.905 00:23:26.905 --- 10.0.0.1 ping statistics --- 00:23:26.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.905 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.905 
13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2428097 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2428097 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2428097 ']' 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:26.905 13:06:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.905 [2024-11-18 13:06:23.981307] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:23:26.905 [2024-11-18 13:06:23.981365] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.905 [2024-11-18 13:06:24.061321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.905 [2024-11-18 13:06:24.101702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.905 [2024-11-18 13:06:24.101737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.905 [2024-11-18 13:06:24.101744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.905 [2024-11-18 13:06:24.101749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.905 [2024-11-18 13:06:24.101755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:26.905 [2024-11-18 13:06:24.102310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.905 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:26.905 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:26.905 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.905 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.905 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.905 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.905 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.906 [2024-11-18 13:06:24.240872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.906 [2024-11-18 13:06:24.253055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:26.906 13:06:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.906 null0 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.906 null1 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2428120 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2428120 /tmp/host.sock 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 2428120 ']' 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:26.906 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.906 [2024-11-18 13:06:24.331941] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:23:26.906 [2024-11-18 13:06:24.331984] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428120 ] 00:23:26.906 [2024-11-18 13:06:24.407482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.906 [2024-11-18 13:06:24.449917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:26.906 
13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.906 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:27.166 13:06:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:27.166 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:27.167 
13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.167 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.167 [2024-11-18 13:06:24.862608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.426 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.426 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:27.426 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.426 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.426 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.427 13:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:23:27.427 13:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:27.996 [2024-11-18 13:06:25.568562] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:27.996 [2024-11-18 13:06:25.568582] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:27.996 [2024-11-18 13:06:25.568593] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.996 [2024-11-18 13:06:25.654862] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:28.256 [2024-11-18 13:06:25.709366] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:28.256 [2024-11-18 13:06:25.710162] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1625ed0:1 started. 00:23:28.256 [2024-11-18 13:06:25.711549] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:28.256 [2024-11-18 13:06:25.711565] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:28.256 [2024-11-18 13:06:25.717373] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1625ed0 was disconnected and freed. delete nvme_qpair. 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:28.516 13:06:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:28.516 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.776 [2024-11-18 13:06:26.272064] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16262a0:1 started. 
00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.776 [2024-11-18 13:06:26.278635] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16262a0 was disconnected and freed. delete nvme_qpair. 
00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:28.776 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.777 [2024-11-18 13:06:26.378724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:28.777 [2024-11-18 13:06:26.379414] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:28.777 [2024-11-18 13:06:26.379433] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:28.777 13:06:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.777 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.777 [2024-11-18 13:06:26.465681] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # local max=10 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:29.038 13:06:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:29.038 [2024-11-18 13:06:26.524250] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:29.038 [2024-11-18 13:06:26.524283] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.038 [2024-11-18 13:06:26.524291] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:23:29.038 [2024-11-18 13:06:26.524295] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.978 [2024-11-18 13:06:27.618504] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:29.978 [2024-11-18 13:06:27.618524] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.978 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:23:29.978 [2024-11-18 13:06:27.624196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.978 [2024-11-18 13:06:27.624218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.978 [2024-11-18 13:06:27.624232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.978 [2024-11-18 13:06:27.624240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:29.979 [2024-11-18 13:06:27.624247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.979 [2024-11-18 13:06:27.624255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.979 [2024-11-18 13:06:27.624264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.979 [2024-11-18 13:06:27.624272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.979 [2024-11-18 13:06:27.624279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6490 is same with the state(6) to be set 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:29.979 13:06:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.979 [2024-11-18 13:06:27.634210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6490 (9): Bad file descriptor 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.979 [2024-11-18 13:06:27.644248] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:29.979 [2024-11-18 13:06:27.644262] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:29.979 [2024-11-18 13:06:27.644268] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:29.979 [2024-11-18 13:06:27.644276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:29.979 [2024-11-18 13:06:27.644293] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:29.979 [2024-11-18 13:06:27.644539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.979 [2024-11-18 13:06:27.644555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6490 with addr=10.0.0.2, port=4420 00:23:29.979 [2024-11-18 13:06:27.644563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6490 is same with the state(6) to be set 00:23:29.979 [2024-11-18 13:06:27.644575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6490 (9): Bad file descriptor 00:23:29.979 [2024-11-18 13:06:27.644593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:29.979 [2024-11-18 13:06:27.644600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:29.979 [2024-11-18 13:06:27.644608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:29.979 [2024-11-18 13:06:27.644618] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:29.979 [2024-11-18 13:06:27.644624] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:29.979 [2024-11-18 13:06:27.644628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:29.979 [2024-11-18 13:06:27.654324] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:29.979 [2024-11-18 13:06:27.654335] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:29.979 [2024-11-18 13:06:27.654339] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:29.979 [2024-11-18 13:06:27.654343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:29.979 [2024-11-18 13:06:27.654361] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:29.979 [2024-11-18 13:06:27.654529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.979 [2024-11-18 13:06:27.654541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6490 with addr=10.0.0.2, port=4420 00:23:29.979 [2024-11-18 13:06:27.654549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6490 is same with the state(6) to be set 00:23:29.979 [2024-11-18 13:06:27.654559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6490 (9): Bad file descriptor 00:23:29.979 [2024-11-18 13:06:27.654568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:29.979 [2024-11-18 13:06:27.654575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:29.979 [2024-11-18 13:06:27.654581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:29.979 [2024-11-18 13:06:27.654587] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:29.979 [2024-11-18 13:06:27.654591] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:29.979 [2024-11-18 13:06:27.654595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:29.979 [2024-11-18 13:06:27.664393] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:29.979 [2024-11-18 13:06:27.664407] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:29.979 [2024-11-18 13:06:27.664411] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:29.979 [2024-11-18 13:06:27.664415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:29.979 [2024-11-18 13:06:27.664430] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:29.979 [2024-11-18 13:06:27.664642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.979 [2024-11-18 13:06:27.664656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6490 with addr=10.0.0.2, port=4420 00:23:29.979 [2024-11-18 13:06:27.664664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6490 is same with the state(6) to be set 00:23:29.979 [2024-11-18 13:06:27.664675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6490 (9): Bad file descriptor 00:23:29.979 [2024-11-18 13:06:27.664702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:29.979 [2024-11-18 13:06:27.664710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:29.979 [2024-11-18 13:06:27.664720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:29.979 [2024-11-18 13:06:27.664726] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:29.979 [2024-11-18 13:06:27.664731] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:29.979 [2024-11-18 13:06:27.664735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:29.979 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:29.979 [2024-11-18 13:06:27.674462] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:29.979 [2024-11-18 13:06:27.674474] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:29.979 [2024-11-18 13:06:27.674478] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:29.979 [2024-11-18 13:06:27.674482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:29.979 [2024-11-18 13:06:27.674495] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:29.979 [2024-11-18 13:06:27.674609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.979 [2024-11-18 13:06:27.674621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6490 with addr=10.0.0.2, port=4420 00:23:29.979 [2024-11-18 13:06:27.674628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6490 is same with the state(6) to be set 00:23:29.979 [2024-11-18 13:06:27.674639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6490 (9): Bad file descriptor 00:23:29.979 [2024-11-18 13:06:27.674648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:29.979 [2024-11-18 13:06:27.674654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:29.979 [2024-11-18 13:06:27.674661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:29.979 [2024-11-18 13:06:27.674667] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:29.979 [2024-11-18 13:06:27.674671] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:29.979 [2024-11-18 13:06:27.674675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.240 [2024-11-18 13:06:27.684527] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:30.240 [2024-11-18 13:06:27.684540] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:30.240 [2024-11-18 13:06:27.684545] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:30.240 [2024-11-18 13:06:27.684549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:30.240 [2024-11-18 13:06:27.684563] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:30.240 [2024-11-18 13:06:27.684741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.240 [2024-11-18 13:06:27.684754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6490 with addr=10.0.0.2, port=4420 00:23:30.240 [2024-11-18 13:06:27.684762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6490 is same with the state(6) to be set 00:23:30.240 [2024-11-18 13:06:27.684773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6490 (9): Bad file descriptor 00:23:30.240 [2024-11-18 13:06:27.684783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:30.240 [2024-11-18 13:06:27.684789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:30.240 [2024-11-18 13:06:27.684796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:30.240 [2024-11-18 13:06:27.684802] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:30.240 [2024-11-18 13:06:27.684806] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:30.240 [2024-11-18 13:06:27.684810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:30.240 [2024-11-18 13:06:27.694595] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:30.240 [2024-11-18 13:06:27.694606] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:30.240 [2024-11-18 13:06:27.694610] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:30.240 [2024-11-18 13:06:27.694614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:30.240 [2024-11-18 13:06:27.694628] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:30.240 [2024-11-18 13:06:27.694795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.240 [2024-11-18 13:06:27.694806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6490 with addr=10.0.0.2, port=4420 00:23:30.240 [2024-11-18 13:06:27.694814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6490 is same with the state(6) to be set 00:23:30.240 [2024-11-18 13:06:27.694825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6490 (9): Bad file descriptor 00:23:30.240 [2024-11-18 13:06:27.694834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:30.240 [2024-11-18 13:06:27.694840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:30.240 [2024-11-18 13:06:27.694848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:30.240 [2024-11-18 13:06:27.694853] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:30.240 [2024-11-18 13:06:27.694858] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:30.240 [2024-11-18 13:06:27.694869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:30.240 [2024-11-18 13:06:27.704659] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:30.240 [2024-11-18 13:06:27.704669] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:30.240 [2024-11-18 13:06:27.704673] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:30.240 [2024-11-18 13:06:27.704677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:30.240 [2024-11-18 13:06:27.704690] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:30.240 [2024-11-18 13:06:27.704840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.240 [2024-11-18 13:06:27.704851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6490 with addr=10.0.0.2, port=4420 00:23:30.240 [2024-11-18 13:06:27.704858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6490 is same with the state(6) to be set 00:23:30.240 [2024-11-18 13:06:27.704868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6490 (9): Bad file descriptor 00:23:30.240 [2024-11-18 13:06:27.704878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:30.240 [2024-11-18 13:06:27.704884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:30.240 [2024-11-18 13:06:27.704890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:30.240 [2024-11-18 13:06:27.704896] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:30.240 [2024-11-18 13:06:27.704900] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:30.240 [2024-11-18 13:06:27.704904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:30.240 [2024-11-18 13:06:27.705127] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:30.240 [2024-11-18 13:06:27.705141] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:23:30.240 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 
00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:30.241 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.501 13:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.438 [2024-11-18 13:06:28.997221] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:31.439 [2024-11-18 13:06:28.997238] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
00:23:31.439 [2024-11-18 13:06:28.997249] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:31.439 [2024-11-18 13:06:29.084525] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:31.706 [2024-11-18 13:06:29.392892] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:31.707 [2024-11-18 13:06:29.393447] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x160d830:1 started. 00:23:31.707 [2024-11-18 13:06:29.395077] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:31.707 [2024-11-18 13:06:29.395103] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.707 [2024-11-18 13:06:29.396798] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x160d830 was disconnected and freed. delete nvme_qpair. 
00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.707 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.971 request: 00:23:31.971 { 00:23:31.971 "name": "nvme", 00:23:31.971 "trtype": "tcp", 00:23:31.971 "traddr": "10.0.0.2", 00:23:31.971 "adrfam": "ipv4", 00:23:31.971 "trsvcid": "8009", 00:23:31.971 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:31.971 "wait_for_attach": true, 00:23:31.971 "method": "bdev_nvme_start_discovery", 00:23:31.971 "req_id": 1 00:23:31.971 } 00:23:31.971 Got JSON-RPC error response 00:23:31.971 response: 00:23:31.971 { 00:23:31.971 "code": -17, 00:23:31.971 "message": "File exists" 00:23:31.971 } 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@653 -- # es=1 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.971 request: 00:23:31.971 { 00:23:31.971 "name": "nvme_second", 
00:23:31.971 "trtype": "tcp", 00:23:31.971 "traddr": "10.0.0.2", 00:23:31.971 "adrfam": "ipv4", 00:23:31.971 "trsvcid": "8009", 00:23:31.971 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:31.971 "wait_for_attach": true, 00:23:31.971 "method": "bdev_nvme_start_discovery", 00:23:31.971 "req_id": 1 00:23:31.971 } 00:23:31.971 Got JSON-RPC error response 00:23:31.971 response: 00:23:31.971 { 00:23:31.971 "code": -17, 00:23:31.971 "message": "File exists" 00:23:31.971 } 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
[[ nvme == \n\v\m\e ]] 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:31.971 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 
00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.972 13:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.350 [2024-11-18 13:06:30.630476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:33.350 [2024-11-18 13:06:30.630513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1625820 with addr=10.0.0.2, port=8010 00:23:33.350 [2024-11-18 13:06:30.630531] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:33.350 [2024-11-18 13:06:30.630539] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:33.350 [2024-11-18 13:06:30.630546] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:34.287 [2024-11-18 13:06:31.632895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.287 [2024-11-18 13:06:31.632919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1625820 with addr=10.0.0.2, port=8010 00:23:34.287 [2024-11-18 13:06:31.632931] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:34.287 [2024-11-18 13:06:31.632937] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:34.287 [2024-11-18 13:06:31.632947] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:35.226 [2024-11-18 13:06:32.635146] bdev_nvme.c:7425:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:35.226 request: 00:23:35.226 { 00:23:35.226 "name": "nvme_second", 00:23:35.226 "trtype": "tcp", 00:23:35.226 "traddr": "10.0.0.2", 00:23:35.226 "adrfam": "ipv4", 00:23:35.226 "trsvcid": "8010", 00:23:35.226 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:35.226 "wait_for_attach": false, 00:23:35.226 "attach_timeout_ms": 3000, 00:23:35.226 "method": "bdev_nvme_start_discovery", 00:23:35.226 "req_id": 1 00:23:35.226 } 00:23:35.226 Got JSON-RPC error response 00:23:35.226 response: 00:23:35.226 { 00:23:35.226 "code": -110, 00:23:35.226 "message": "Connection timed out" 00:23:35.226 } 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 
00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2428120 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.226 rmmod nvme_tcp 00:23:35.226 rmmod nvme_fabrics 00:23:35.226 rmmod nvme_keyring 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2428097 ']' 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2428097 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 2428097 ']' 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # 
kill -0 2428097 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2428097 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2428097' 00:23:35.226 killing process with pid 2428097 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 2428097 00:23:35.226 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 2428097 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.486 13:06:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.509 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:37.509 00:23:37.509 real 0m17.269s 00:23:37.509 user 0m20.597s 00:23:37.509 sys 0m5.859s 00:23:37.509 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:37.509 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.509 ************************************ 00:23:37.509 END TEST nvmf_host_discovery 00:23:37.509 ************************************ 00:23:37.509 13:06:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:37.509 13:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:37.509 13:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:37.509 13:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.509 ************************************ 00:23:37.509 START TEST nvmf_host_multipath_status 00:23:37.509 ************************************ 00:23:37.509 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:37.509 * Looking for test storage... 
00:23:37.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:37.796 13:06:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:37.796 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.796 13:06:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:37.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.796 --rc genhtml_branch_coverage=1 00:23:37.796 --rc genhtml_function_coverage=1 00:23:37.796 --rc genhtml_legend=1 00:23:37.797 --rc geninfo_all_blocks=1 00:23:37.797 --rc geninfo_unexecuted_blocks=1 00:23:37.797 00:23:37.797 ' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:37.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.797 --rc genhtml_branch_coverage=1 00:23:37.797 --rc genhtml_function_coverage=1 00:23:37.797 --rc genhtml_legend=1 00:23:37.797 --rc geninfo_all_blocks=1 00:23:37.797 --rc geninfo_unexecuted_blocks=1 00:23:37.797 00:23:37.797 ' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:37.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.797 --rc genhtml_branch_coverage=1 00:23:37.797 --rc genhtml_function_coverage=1 00:23:37.797 --rc genhtml_legend=1 00:23:37.797 --rc geninfo_all_blocks=1 00:23:37.797 --rc geninfo_unexecuted_blocks=1 00:23:37.797 00:23:37.797 ' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:37.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.797 --rc genhtml_branch_coverage=1 00:23:37.797 --rc genhtml_function_coverage=1 00:23:37.797 --rc genhtml_legend=1 00:23:37.797 --rc geninfo_all_blocks=1 00:23:37.797 --rc geninfo_unexecuted_blocks=1 00:23:37.797 00:23:37.797 ' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:37.797 
13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:37.797 13:06:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:37.797 13:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:43.253 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:43.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:43.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:43.254 Found net devices under 0000:86:00.0: cvl_0_0 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.254 13:06:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:43.254 Found net devices under 0000:86:00.1: cvl_0_1 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.254 13:06:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:43.254 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.515 13:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:43.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:23:43.515 00:23:43.515 --- 10.0.0.2 ping statistics --- 00:23:43.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.515 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:23:43.515 00:23:43.515 --- 10.0.0.1 ping statistics --- 00:23:43.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.515 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:43.515 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2433210 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2433210 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2433210 ']' 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:43.775 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:43.775 [2024-11-18 13:06:41.275301] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:23:43.775 [2024-11-18 13:06:41.275360] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.775 [2024-11-18 13:06:41.354297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:43.775 [2024-11-18 13:06:41.396033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.775 [2024-11-18 13:06:41.396069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:43.775 [2024-11-18 13:06:41.396076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.775 [2024-11-18 13:06:41.396083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.775 [2024-11-18 13:06:41.396088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.775 [2024-11-18 13:06:41.397328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.776 [2024-11-18 13:06:41.397328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.036 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:44.036 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:23:44.036 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.036 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.036 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:44.036 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.036 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2433210 00:23:44.036 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:44.036 [2024-11-18 13:06:41.698253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.036 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:44.296 Malloc0 00:23:44.296 13:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:44.556 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:44.816 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.076 [2024-11-18 13:06:42.540988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:45.076 [2024-11-18 13:06:42.729421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2433463 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2433463 /var/tmp/bdevperf.sock 00:23:45.076 13:06:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2433463 ']' 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:45.076 13:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:45.336 13:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:45.336 13:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:23:45.336 13:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:45.596 13:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:46.165 Nvme0n1 00:23:46.165 13:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:46.425 Nvme0n1 00:23:46.684 13:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:46.684 13:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:48.596 13:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:48.596 13:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:48.856 13:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:49.116 13:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:50.055 13:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:50.055 13:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:50.055 13:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.055 13:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.315 13:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.315 13:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:50.315 13:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.315 13:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.315 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.315 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:50.315 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.315 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:50.574 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.574 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:50.574 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.574 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:50.834 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.834 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:50.834 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.834 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.094 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.094 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:51.094 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.094 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.354 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.354 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:51.354 13:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:51.614 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:51.614 13:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.058 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.318 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.318 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.318 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.318 13:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.577 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.577 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:53.577 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:53.577 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.837 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.837 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:53.837 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.837 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.097 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.097 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:54.097 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:54.356 13:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:54.356 13:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:55.735 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:55.735 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.735 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.735 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.735 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.735 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:55.735 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.735 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.995 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.995 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.995 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.995 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.995 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.995 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.995 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.995 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.254 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.254 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:56.254 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.254 13:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.513 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.514 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:56.514 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:56.514 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.773 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.773 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:56.773 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:57.032 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:57.032 13:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:58.412 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:58.412 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:58.412 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.412 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.412 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.412 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:58.412 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.412 13:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.672 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.672 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.672 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.672 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:58.672 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.672 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:58.672 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.672 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.931 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.931 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:58.931 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.931 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.190 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.190 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:59.190 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.190 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.449 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.449 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:59.449 13:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:59.708 13:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:59.708 13:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:01.088 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:01.088 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:01.088 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.088 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.088 13:06:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.088 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:01.088 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.088 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.348 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.348 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.348 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.348 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.348 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.348 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.348 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.348 13:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.608 
13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.608 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:01.608 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.608 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.866 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.866 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:01.866 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.866 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.126 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.126 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:02.126 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:02.126 13:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.385 13:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.765 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.025 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.025 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.025 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.025 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.284 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.284 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:04.284 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.284 13:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.543 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.543 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.543 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.543 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.803 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.803 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:05.062 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:05.062 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:05.062 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:05.321 13:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:06.261 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:06.261 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:06.261 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:06.261 13:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.521 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.521 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:06.521 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.521 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.780 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.780 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.780 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.780 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:07.040 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.040 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:07.040 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:07.040 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:07.300 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.300 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:07.300 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.300 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.300 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.300 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.300 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.559 13:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.559 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.559 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:07.559 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.818 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:08.077 13:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:09.016 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:09.016 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:09.016 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.016 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:09.276 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.276 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:09.276 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.276 13:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.536 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.537 13:07:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.537 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.537 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.797 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.797 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.797 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.797 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.797 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.797 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.797 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.797 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:10.056 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.056 
13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:10.056 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.056 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.315 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.315 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:10.315 13:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:10.575 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:10.835 13:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:11.773 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:11.773 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:11.773 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.773 13:07:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:12.033 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.033 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:12.033 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.033 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:12.033 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.033 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:12.033 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.033 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:12.293 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.293 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:12.293 13:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.293 13:07:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:12.552 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.553 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:12.553 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.553 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.812 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.812 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:12.812 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.812 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:13.073 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.073 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:13.073 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:13.333 13:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:13.333 13:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:14.714 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:14.714 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:14.714 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.714 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:14.714 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.714 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:14.714 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.714 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.973 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.973 13:07:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.973 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.973 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:15.232 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.232 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:15.232 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.232 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:15.232 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.232 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:15.232 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:15.232 13:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.490 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.490 
13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:15.490 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.490 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:15.749 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:15.749 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2433463 00:24:15.749 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2433463 ']' 00:24:15.749 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2433463 00:24:15.749 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:15.749 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:15.749 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2433463 00:24:15.749 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:15.749 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:15.750 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2433463' 00:24:15.750 killing process with pid 2433463 00:24:15.750 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2433463 00:24:15.750 
13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2433463 00:24:15.750 { 00:24:15.750 "results": [ 00:24:15.750 { 00:24:15.750 "job": "Nvme0n1", 00:24:15.750 "core_mask": "0x4", 00:24:15.750 "workload": "verify", 00:24:15.750 "status": "terminated", 00:24:15.750 "verify_range": { 00:24:15.750 "start": 0, 00:24:15.750 "length": 16384 00:24:15.750 }, 00:24:15.750 "queue_depth": 128, 00:24:15.750 "io_size": 4096, 00:24:15.750 "runtime": 29.060587, 00:24:15.750 "iops": 10547.481370558688, 00:24:15.750 "mibps": 41.20109910374487, 00:24:15.750 "io_failed": 0, 00:24:15.750 "io_timeout": 0, 00:24:15.750 "avg_latency_us": 12116.23681215024, 00:24:15.750 "min_latency_us": 135.34608695652173, 00:24:15.750 "max_latency_us": 3019898.88 00:24:15.750 } 00:24:15.750 ], 00:24:15.750 "core_count": 1 00:24:15.750 } 00:24:16.013 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2433463 00:24:16.013 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:16.013 [2024-11-18 13:06:42.806078] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:24:16.013 [2024-11-18 13:06:42.806131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433463 ] 00:24:16.013 [2024-11-18 13:06:42.879585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.013 [2024-11-18 13:06:42.920387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.013 Running I/O for 90 seconds... 
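The terminated bdevperf job summary above reports both raw IOPS and throughput in MiB/s for the 4096-byte `io_size`. The two figures are internally consistent, which is a quick way to sanity-check a run like this one. A minimal check, with the values copied from the JSON summary above (illustrative only, not part of the test scripts):

```python
# Sanity-check the bdevperf job summary printed above:
# mibps should equal iops * io_size / 2**20.
iops = 10547.481370558688   # "iops" field from the summary
io_size = 4096              # "io_size" field, in bytes
runtime = 29.060587         # "runtime" field, in seconds

mibps = iops * io_size / (1 << 20)
print(f"{mibps:.2f} MiB/s")  # 41.20 MiB/s, matching the reported "mibps"

# Approximate total I/Os completed, derived from the averaged IOPS:
total_ios = iops * runtime
print(f"~{total_ios:,.0f} I/Os over {runtime:.1f}s")
```

Note that `iops` here is the run-wide average; the per-second progress lines that follow fluctuate around it.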
00:24:16.013 11364.00 IOPS, 44.39 MiB/s [2024-11-18T12:07:13.715Z] 11379.00 IOPS, 44.45 MiB/s [2024-11-18T12:07:13.715Z] 11349.67 IOPS, 44.33 MiB/s [2024-11-18T12:07:13.715Z] 11329.00 IOPS, 44.25 MiB/s [2024-11-18T12:07:13.715Z] 11320.00 IOPS, 44.22 MiB/s [2024-11-18T12:07:13.715Z] 11262.00 IOPS, 43.99 MiB/s [2024-11-18T12:07:13.715Z] 11257.43 IOPS, 43.97 MiB/s [2024-11-18T12:07:13.715Z] 11263.75 IOPS, 44.00 MiB/s [2024-11-18T12:07:13.715Z] 11254.67 IOPS, 43.96 MiB/s [2024-11-18T12:07:13.715Z] 11264.80 IOPS, 44.00 MiB/s [2024-11-18T12:07:13.715Z] 11264.00 IOPS, 44.00 MiB/s [2024-11-18T12:07:13.715Z] 11263.58 IOPS, 44.00 MiB/s [2024-11-18T12:07:13.715Z] [2024-11-18 13:06:57.147240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:16.013 [2024-11-18 13:06:57.147693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.013 [2024-11-18 13:06:57.147700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.147925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.147932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-11-18 13:06:57.148560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-11-18 13:06:57.148581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-11-18 13:06:57.148602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-11-18 13:06:57.148623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-11-18 13:06:57.148644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-11-18 13:06:57.148667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-11-18 13:06:57.148688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.014 [2024-11-18 13:06:57.148902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:16.014 [2024-11-18 13:06:57.148917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.148925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.148940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.148946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.148961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.148968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.148982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.148989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.015 [2024-11-18 13:06:57.149674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.015 [2024-11-18 13:06:57.149861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:16.015 [2024-11-18 13:06:57.149877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.149885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.149902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.149909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.149926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.149933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.149949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.149957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.149973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.149980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.149996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.016 [2024-11-18 13:06:57.150516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.016 [2024-11-18 13:06:57.150541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.016 [2024-11-18 13:06:57.150567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.016 [2024-11-18 13:06:57.150592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.016 [2024-11-18 13:06:57.150617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:16.016 [2024-11-18 13:06:57.150635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.016 [2024-11-18 13:06:57.150642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.016
[... repeated nvme_qpair.c READ/WRITE command prints and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion NOTICE lines elided ...]
11162.62 IOPS, 43.60 MiB/s [2024-11-18T12:07:13.718Z] 10365.29 IOPS, 40.49 MiB/s [2024-11-18T12:07:13.718Z] 9674.27 IOPS, 37.79 MiB/s [2024-11-18T12:07:13.718Z] 9152.81 IOPS, 35.75 MiB/s [2024-11-18T12:07:13.718Z] 9275.59 IOPS, 36.23 MiB/s [2024-11-18T12:07:13.718Z] 9390.67 IOPS, 36.68 MiB/s [2024-11-18T12:07:13.718Z] 9547.21 IOPS, 37.29 MiB/s [2024-11-18T12:07:13.718Z] 9739.95 IOPS, 38.05 MiB/s [2024-11-18T12:07:13.718Z] 9920.43 IOPS, 38.75 MiB/s [2024-11-18T12:07:13.718Z] 9995.82 IOPS, 39.05 MiB/s [2024-11-18T12:07:13.718Z] 10048.43 IOPS, 39.25 MiB/s [2024-11-18T12:07:13.718Z] 10096.83 IOPS, 39.44 MiB/s [2024-11-18T12:07:13.718Z] 10239.72 IOPS, 40.00 MiB/s [2024-11-18T12:07:13.718Z] 10366.92 IOPS, 40.50 MiB/s [2024-11-18T12:07:13.718Z]
[... repeated nvme_qpair.c READ/WRITE command prints and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion NOTICE lines elided ...]
10477.67 IOPS, 40.93 MiB/s [2024-11-18T12:07:13.719Z] 10512.29 IOPS, 41.06 MiB/s [2024-11-18T12:07:13.719Z] 10547.21 IOPS, 41.20 MiB/s [2024-11-18T12:07:13.719Z]
Received shutdown signal, test time was about 29.061230 seconds 00:24:16.017 00:24:16.018
Latency(us) 00:24:16.018
[2024-11-18T12:07:13.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.018
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:16.018
Verification LBA range: start 0x0 length 0x4000 00:24:16.018
Nvme0n1 : 29.06 10547.48 41.20 0.00 0.00 12116.24 135.35 3019898.88 00:24:16.018
[2024-11-18T12:07:13.720Z] =================================================================================================================== 00:24:16.018
[2024-11-18T12:07:13.720Z] Total : 10547.48 41.20 0.00 0.00 12116.24 135.35 3019898.88 00:24:16.018
13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.277 rmmod nvme_tcp 00:24:16.277 rmmod nvme_fabrics 00:24:16.277 rmmod nvme_keyring 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2433210 ']' 00:24:16.277 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2433210 00:24:16.278 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2433210 ']' 00:24:16.278 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2433210 00:24:16.278 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:16.278 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:16.278 
13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2433210 00:24:16.278 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:16.278 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:16.278 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2433210' 00:24:16.278 killing process with pid 2433210 00:24:16.278 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2433210 00:24:16.278 13:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2433210 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.537 
13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.537 13:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.443 13:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.443 00:24:18.443 real 0m41.007s 00:24:18.443 user 1m51.686s 00:24:18.443 sys 0m11.452s 00:24:18.443 13:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:18.443 13:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:18.443 ************************************ 00:24:18.443 END TEST nvmf_host_multipath_status 00:24:18.443 ************************************ 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.704 ************************************ 00:24:18.704 START TEST nvmf_discovery_remove_ifc 00:24:18.704 ************************************ 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:18.704 * Looking for test storage... 
00:24:18.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:24:18.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.704 --rc genhtml_branch_coverage=1 00:24:18.704 --rc genhtml_function_coverage=1 00:24:18.704 --rc genhtml_legend=1 00:24:18.704 --rc geninfo_all_blocks=1 00:24:18.704 --rc geninfo_unexecuted_blocks=1 00:24:18.704 00:24:18.704 ' 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:18.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.704 --rc genhtml_branch_coverage=1 00:24:18.704 --rc genhtml_function_coverage=1 00:24:18.704 --rc genhtml_legend=1 00:24:18.704 --rc geninfo_all_blocks=1 00:24:18.704 --rc geninfo_unexecuted_blocks=1 00:24:18.704 00:24:18.704 ' 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:18.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.704 --rc genhtml_branch_coverage=1 00:24:18.704 --rc genhtml_function_coverage=1 00:24:18.704 --rc genhtml_legend=1 00:24:18.704 --rc geninfo_all_blocks=1 00:24:18.704 --rc geninfo_unexecuted_blocks=1 00:24:18.704 00:24:18.704 ' 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:18.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.704 --rc genhtml_branch_coverage=1 00:24:18.704 --rc genhtml_function_coverage=1 00:24:18.704 --rc genhtml_legend=1 00:24:18.704 --rc geninfo_all_blocks=1 00:24:18.704 --rc geninfo_unexecuted_blocks=1 00:24:18.704 00:24:18.704 ' 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.704 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.705 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.705 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.705 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.705 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.705 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.964 
13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.964 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.965 13:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:25.541 13:07:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.541 13:07:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:25.541 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.541 13:07:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:25.541 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:25.541 Found net devices under 0000:86:00.0: cvl_0_0 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:25.541 Found net devices under 0000:86:00.1: cvl_0_1 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.541 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:25.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:24:25.542 00:24:25.542 --- 10.0.0.2 ping statistics --- 00:24:25.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.542 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:25.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:25.542 00:24:25.542 --- 10.0.0.1 ping statistics --- 00:24:25.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.542 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2442115 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2442115 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2442115 ']' 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.542 [2024-11-18 13:07:22.413198] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:24:25.542 [2024-11-18 13:07:22.413249] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.542 [2024-11-18 13:07:22.493563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.542 [2024-11-18 13:07:22.533231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.542 [2024-11-18 13:07:22.533269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:25.542 [2024-11-18 13:07:22.533277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.542 [2024-11-18 13:07:22.533282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.542 [2024-11-18 13:07:22.533288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.542 [2024-11-18 13:07:22.533874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.542 [2024-11-18 13:07:22.685020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.542 [2024-11-18 13:07:22.693216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:25.542 null0 00:24:25.542 [2024-11-18 13:07:22.725172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2442245 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2442245 /tmp/host.sock 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2442245 ']' 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:25.542 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.542 [2024-11-18 13:07:22.794500] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:24:25.542 [2024-11-18 13:07:22.794546] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442245 ] 00:24:25.542 [2024-11-18 13:07:22.867471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.542 [2024-11-18 13:07:22.910274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.542 13:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.542 13:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.543 13:07:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:25.543 13:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.543 13:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.482 [2024-11-18 13:07:24.091500] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:26.482 [2024-11-18 13:07:24.091520] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:26.482 [2024-11-18 13:07:24.091536] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:26.482 [2024-11-18 13:07:24.179805] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:26.741 [2024-11-18 13:07:24.282454] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:26.741 [2024-11-18 13:07:24.283222] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb6daf0:1 started. 
00:24:26.741 [2024-11-18 13:07:24.284560] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:26.741 [2024-11-18 13:07:24.284597] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:26.741 [2024-11-18 13:07:24.284614] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:26.741 [2024-11-18 13:07:24.284626] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:26.741 [2024-11-18 13:07:24.284644] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.741 [2024-11-18 13:07:24.290468] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb6daf0 was disconnected and freed. delete nvme_qpair. 
00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:26.741 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:26.742 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:26.742 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:27.001 13:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:27.939 13:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:28.877 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.877 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:24:28.877 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.877 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.877 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.877 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.877 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.877 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.137 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:29.137 13:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.074 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.075 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.075 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.075 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.075 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.075 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.075 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.075 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.075 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:30.075 13:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:31.013 13:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.428 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.428 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.428 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.428 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.428 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:24:32.428 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.428 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.428 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.429 [2024-11-18 13:07:29.726175] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:32.429 [2024-11-18 13:07:29.726213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.429 [2024-11-18 13:07:29.726244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.429 [2024-11-18 13:07:29.726254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.429 [2024-11-18 13:07:29.726260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.429 [2024-11-18 13:07:29.726268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.429 [2024-11-18 13:07:29.726276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.429 [2024-11-18 13:07:29.726283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.429 [2024-11-18 13:07:29.726290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.429 [2024-11-18 13:07:29.726298] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.429 [2024-11-18 13:07:29.726305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.429 [2024-11-18 13:07:29.726311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4a320 is same with the state(6) to be set 00:24:32.429 [2024-11-18 13:07:29.736196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4a320 (9): Bad file descriptor 00:24:32.429 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:32.429 13:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.429 [2024-11-18 13:07:29.746232] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:32.429 [2024-11-18 13:07:29.746244] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:32.429 [2024-11-18 13:07:29.746249] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:32.429 [2024-11-18 13:07:29.746253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:32.429 [2024-11-18 13:07:29.746273] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:33.362 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.362 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.362 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.363 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.363 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.363 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.363 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.363 [2024-11-18 13:07:30.752410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:33.363 [2024-11-18 13:07:30.752500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4a320 with addr=10.0.0.2, port=4420 00:24:33.363 [2024-11-18 13:07:30.752534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4a320 is same with the state(6) to be set 00:24:33.363 [2024-11-18 13:07:30.752593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4a320 (9): Bad file descriptor 00:24:33.363 [2024-11-18 13:07:30.753580] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:33.363 [2024-11-18 13:07:30.753645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:33.363 [2024-11-18 13:07:30.753669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:33.363 [2024-11-18 13:07:30.753693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:33.363 [2024-11-18 13:07:30.753715] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:33.363 [2024-11-18 13:07:30.753732] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:33.363 [2024-11-18 13:07:30.753746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:33.363 [2024-11-18 13:07:30.753767] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:33.363 [2024-11-18 13:07:30.753783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:33.363 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.363 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:33.363 13:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.301 [2024-11-18 13:07:31.756305] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:34.301 [2024-11-18 13:07:31.756327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:34.301 [2024-11-18 13:07:31.756339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:34.301 [2024-11-18 13:07:31.756345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:34.301 [2024-11-18 13:07:31.756357] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:34.301 [2024-11-18 13:07:31.756365] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:34.301 [2024-11-18 13:07:31.756370] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:34.301 [2024-11-18 13:07:31.756374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:34.301 [2024-11-18 13:07:31.756399] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:34.301 [2024-11-18 13:07:31.756421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.301 [2024-11-18 13:07:31.756431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.301 [2024-11-18 13:07:31.756440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.301 [2024-11-18 13:07:31.756447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.301 [2024-11-18 13:07:31.756454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:34.301 [2024-11-18 13:07:31.756461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.301 [2024-11-18 13:07:31.756468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.301 [2024-11-18 13:07:31.756474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.301 [2024-11-18 13:07:31.756486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.301 [2024-11-18 13:07:31.756492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.301 [2024-11-18 13:07:31.756500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:34.301 [2024-11-18 13:07:31.756891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb39a00 (9): Bad file descriptor 00:24:34.301 [2024-11-18 13:07:31.757901] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:34.301 [2024-11-18 13:07:31.757911] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.301 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.302 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.302 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.302 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.302 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:34.302 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:34.302 13:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:35.680 13:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.680 13:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.680 13:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.680 13:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.680 13:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.680 13:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.680 13:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.680 13:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.680 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:35.680 13:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.247 [2024-11-18 13:07:33.806524] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:36.248 [2024-11-18 13:07:33.806542] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:36.248 [2024-11-18 13:07:33.806559] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:36.248 [2024-11-18 13:07:33.892813] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.507 [2024-11-18 13:07:34.069753] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:36.507 [2024-11-18 13:07:34.070397] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xb3e860:1 started. 
00:24:36.507 [2024-11-18 13:07:34.071469] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:36.507 [2024-11-18 13:07:34.071499] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:36.507 [2024-11-18 13:07:34.071516] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:36.507 [2024-11-18 13:07:34.071530] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:36.507 [2024-11-18 13:07:34.071537] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:36.507 13:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.507 [2024-11-18 13:07:34.075600] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xb3e860 was disconnected and freed. delete nvme_qpair. 
00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2442245 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2442245 ']' 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2442245 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:37.445 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2442245 
00:24:37.704 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:37.704 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:37.704 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2442245' 00:24:37.704 killing process with pid 2442245 00:24:37.704 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2442245 00:24:37.704 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2442245 00:24:37.704 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:37.704 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.705 rmmod nvme_tcp 00:24:37.705 rmmod nvme_fabrics 00:24:37.705 rmmod nvme_keyring 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2442115 ']' 00:24:37.705 
13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2442115 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2442115 ']' 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2442115 00:24:37.705 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2442115 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2442115' 00:24:37.964 killing process with pid 2442115 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2442115 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2442115 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:37.964 13:07:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.964 13:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.551 00:24:40.551 real 0m21.483s 00:24:40.551 user 0m26.738s 00:24:40.551 sys 0m5.901s 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.551 ************************************ 00:24:40.551 END TEST nvmf_discovery_remove_ifc 00:24:40.551 ************************************ 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.551 ************************************ 
00:24:40.551 START TEST nvmf_identify_kernel_target 00:24:40.551 ************************************ 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:40.551 * Looking for test storage... 00:24:40.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.551 13:07:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:40.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.551 --rc genhtml_branch_coverage=1 00:24:40.551 --rc genhtml_function_coverage=1 00:24:40.551 --rc genhtml_legend=1 00:24:40.551 --rc geninfo_all_blocks=1 00:24:40.551 --rc geninfo_unexecuted_blocks=1 00:24:40.551 00:24:40.551 ' 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:40.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.551 --rc genhtml_branch_coverage=1 00:24:40.551 --rc genhtml_function_coverage=1 00:24:40.551 --rc genhtml_legend=1 00:24:40.551 --rc geninfo_all_blocks=1 00:24:40.551 --rc geninfo_unexecuted_blocks=1 00:24:40.551 00:24:40.551 ' 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:40.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.551 --rc genhtml_branch_coverage=1 00:24:40.551 --rc genhtml_function_coverage=1 00:24:40.551 --rc genhtml_legend=1 00:24:40.551 --rc geninfo_all_blocks=1 00:24:40.551 --rc geninfo_unexecuted_blocks=1 00:24:40.551 00:24:40.551 ' 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:40.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.551 --rc genhtml_branch_coverage=1 00:24:40.551 --rc genhtml_function_coverage=1 00:24:40.551 --rc genhtml_legend=1 00:24:40.551 --rc geninfo_all_blocks=1 
00:24:40.551 --rc geninfo_unexecuted_blocks=1 00:24:40.551 00:24:40.551 ' 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.551 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.552 13:07:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.230 13:07:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:47.230 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.230 13:07:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:47.230 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.230 13:07:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:47.230 Found net devices under 0000:86:00.0: cvl_0_0 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:47.230 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.230 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:47.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:24:47.231 00:24:47.231 --- 10.0.0.2 ping statistics --- 00:24:47.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.231 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:24:47.231 00:24:47.231 --- 10.0.0.1 ping statistics --- 00:24:47.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.231 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:47.231 
13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:47.231 13:07:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:49.132 Waiting for block devices as requested 00:24:49.132 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:49.391 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:49.391 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:49.391 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:49.650 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:49.650 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:49.650 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:49.650 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:49.909 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:49.909 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:49.909 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:50.167 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:50.167 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:50.167 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:50.167 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:24:50.426 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:50.426 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:50.426 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:50.426 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:50.426 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:50.426 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:50.426 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:50.426 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:50.426 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:50.426 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:50.426 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:50.686 No valid GPT data, bailing 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:50.686 00:24:50.686 Discovery Log Number of Records 2, Generation counter 2 00:24:50.686 =====Discovery Log Entry 0====== 00:24:50.686 trtype: tcp 00:24:50.686 adrfam: ipv4 00:24:50.686 subtype: current discovery subsystem 
00:24:50.686 treq: not specified, sq flow control disable supported 00:24:50.686 portid: 1 00:24:50.686 trsvcid: 4420 00:24:50.686 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:50.686 traddr: 10.0.0.1 00:24:50.686 eflags: none 00:24:50.686 sectype: none 00:24:50.686 =====Discovery Log Entry 1====== 00:24:50.686 trtype: tcp 00:24:50.686 adrfam: ipv4 00:24:50.686 subtype: nvme subsystem 00:24:50.686 treq: not specified, sq flow control disable supported 00:24:50.686 portid: 1 00:24:50.686 trsvcid: 4420 00:24:50.686 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:50.686 traddr: 10.0.0.1 00:24:50.686 eflags: none 00:24:50.686 sectype: none 00:24:50.686 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:50.686 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:50.686 ===================================================== 00:24:50.686 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:50.686 ===================================================== 00:24:50.686 Controller Capabilities/Features 00:24:50.686 ================================ 00:24:50.686 Vendor ID: 0000 00:24:50.686 Subsystem Vendor ID: 0000 00:24:50.686 Serial Number: c4840cfc11070da77ece 00:24:50.686 Model Number: Linux 00:24:50.686 Firmware Version: 6.8.9-20 00:24:50.686 Recommended Arb Burst: 0 00:24:50.686 IEEE OUI Identifier: 00 00 00 00:24:50.686 Multi-path I/O 00:24:50.686 May have multiple subsystem ports: No 00:24:50.686 May have multiple controllers: No 00:24:50.686 Associated with SR-IOV VF: No 00:24:50.686 Max Data Transfer Size: Unlimited 00:24:50.686 Max Number of Namespaces: 0 00:24:50.686 Max Number of I/O Queues: 1024 00:24:50.686 NVMe Specification Version (VS): 1.3 00:24:50.686 NVMe Specification Version (Identify): 1.3 00:24:50.686 Maximum Queue Entries: 1024 
00:24:50.686 Contiguous Queues Required: No 00:24:50.686 Arbitration Mechanisms Supported 00:24:50.686 Weighted Round Robin: Not Supported 00:24:50.686 Vendor Specific: Not Supported 00:24:50.686 Reset Timeout: 7500 ms 00:24:50.686 Doorbell Stride: 4 bytes 00:24:50.686 NVM Subsystem Reset: Not Supported 00:24:50.686 Command Sets Supported 00:24:50.686 NVM Command Set: Supported 00:24:50.686 Boot Partition: Not Supported 00:24:50.686 Memory Page Size Minimum: 4096 bytes 00:24:50.686 Memory Page Size Maximum: 4096 bytes 00:24:50.686 Persistent Memory Region: Not Supported 00:24:50.686 Optional Asynchronous Events Supported 00:24:50.686 Namespace Attribute Notices: Not Supported 00:24:50.686 Firmware Activation Notices: Not Supported 00:24:50.686 ANA Change Notices: Not Supported 00:24:50.686 PLE Aggregate Log Change Notices: Not Supported 00:24:50.686 LBA Status Info Alert Notices: Not Supported 00:24:50.686 EGE Aggregate Log Change Notices: Not Supported 00:24:50.686 Normal NVM Subsystem Shutdown event: Not Supported 00:24:50.686 Zone Descriptor Change Notices: Not Supported 00:24:50.687 Discovery Log Change Notices: Supported 00:24:50.687 Controller Attributes 00:24:50.687 128-bit Host Identifier: Not Supported 00:24:50.687 Non-Operational Permissive Mode: Not Supported 00:24:50.687 NVM Sets: Not Supported 00:24:50.687 Read Recovery Levels: Not Supported 00:24:50.687 Endurance Groups: Not Supported 00:24:50.687 Predictable Latency Mode: Not Supported 00:24:50.687 Traffic Based Keep ALive: Not Supported 00:24:50.687 Namespace Granularity: Not Supported 00:24:50.687 SQ Associations: Not Supported 00:24:50.687 UUID List: Not Supported 00:24:50.687 Multi-Domain Subsystem: Not Supported 00:24:50.687 Fixed Capacity Management: Not Supported 00:24:50.687 Variable Capacity Management: Not Supported 00:24:50.687 Delete Endurance Group: Not Supported 00:24:50.687 Delete NVM Set: Not Supported 00:24:50.687 Extended LBA Formats Supported: Not Supported 00:24:50.687 Flexible 
Data Placement Supported: Not Supported 00:24:50.687 00:24:50.687 Controller Memory Buffer Support 00:24:50.687 ================================ 00:24:50.687 Supported: No 00:24:50.687 00:24:50.687 Persistent Memory Region Support 00:24:50.687 ================================ 00:24:50.687 Supported: No 00:24:50.687 00:24:50.687 Admin Command Set Attributes 00:24:50.687 ============================ 00:24:50.687 Security Send/Receive: Not Supported 00:24:50.687 Format NVM: Not Supported 00:24:50.687 Firmware Activate/Download: Not Supported 00:24:50.687 Namespace Management: Not Supported 00:24:50.687 Device Self-Test: Not Supported 00:24:50.687 Directives: Not Supported 00:24:50.687 NVMe-MI: Not Supported 00:24:50.687 Virtualization Management: Not Supported 00:24:50.687 Doorbell Buffer Config: Not Supported 00:24:50.687 Get LBA Status Capability: Not Supported 00:24:50.687 Command & Feature Lockdown Capability: Not Supported 00:24:50.687 Abort Command Limit: 1 00:24:50.687 Async Event Request Limit: 1 00:24:50.687 Number of Firmware Slots: N/A 00:24:50.687 Firmware Slot 1 Read-Only: N/A 00:24:50.687 Firmware Activation Without Reset: N/A 00:24:50.687 Multiple Update Detection Support: N/A 00:24:50.687 Firmware Update Granularity: No Information Provided 00:24:50.687 Per-Namespace SMART Log: No 00:24:50.687 Asymmetric Namespace Access Log Page: Not Supported 00:24:50.687 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:50.687 Command Effects Log Page: Not Supported 00:24:50.687 Get Log Page Extended Data: Supported 00:24:50.687 Telemetry Log Pages: Not Supported 00:24:50.687 Persistent Event Log Pages: Not Supported 00:24:50.687 Supported Log Pages Log Page: May Support 00:24:50.687 Commands Supported & Effects Log Page: Not Supported 00:24:50.687 Feature Identifiers & Effects Log Page:May Support 00:24:50.687 NVMe-MI Commands & Effects Log Page: May Support 00:24:50.687 Data Area 4 for Telemetry Log: Not Supported 00:24:50.687 Error Log Page Entries 
Supported: 1 00:24:50.687 Keep Alive: Not Supported 00:24:50.687 00:24:50.687 NVM Command Set Attributes 00:24:50.687 ========================== 00:24:50.687 Submission Queue Entry Size 00:24:50.687 Max: 1 00:24:50.687 Min: 1 00:24:50.687 Completion Queue Entry Size 00:24:50.687 Max: 1 00:24:50.687 Min: 1 00:24:50.687 Number of Namespaces: 0 00:24:50.687 Compare Command: Not Supported 00:24:50.687 Write Uncorrectable Command: Not Supported 00:24:50.687 Dataset Management Command: Not Supported 00:24:50.687 Write Zeroes Command: Not Supported 00:24:50.687 Set Features Save Field: Not Supported 00:24:50.687 Reservations: Not Supported 00:24:50.687 Timestamp: Not Supported 00:24:50.687 Copy: Not Supported 00:24:50.687 Volatile Write Cache: Not Present 00:24:50.687 Atomic Write Unit (Normal): 1 00:24:50.687 Atomic Write Unit (PFail): 1 00:24:50.687 Atomic Compare & Write Unit: 1 00:24:50.687 Fused Compare & Write: Not Supported 00:24:50.687 Scatter-Gather List 00:24:50.687 SGL Command Set: Supported 00:24:50.687 SGL Keyed: Not Supported 00:24:50.687 SGL Bit Bucket Descriptor: Not Supported 00:24:50.687 SGL Metadata Pointer: Not Supported 00:24:50.687 Oversized SGL: Not Supported 00:24:50.687 SGL Metadata Address: Not Supported 00:24:50.687 SGL Offset: Supported 00:24:50.687 Transport SGL Data Block: Not Supported 00:24:50.687 Replay Protected Memory Block: Not Supported 00:24:50.687 00:24:50.687 Firmware Slot Information 00:24:50.687 ========================= 00:24:50.687 Active slot: 0 00:24:50.687 00:24:50.687 00:24:50.687 Error Log 00:24:50.687 ========= 00:24:50.687 00:24:50.687 Active Namespaces 00:24:50.687 ================= 00:24:50.687 Discovery Log Page 00:24:50.687 ================== 00:24:50.687 Generation Counter: 2 00:24:50.687 Number of Records: 2 00:24:50.687 Record Format: 0 00:24:50.687 00:24:50.687 Discovery Log Entry 0 00:24:50.687 ---------------------- 00:24:50.687 Transport Type: 3 (TCP) 00:24:50.687 Address Family: 1 (IPv4) 00:24:50.687 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:50.687 Entry Flags: 00:24:50.687 Duplicate Returned Information: 0 00:24:50.687 Explicit Persistent Connection Support for Discovery: 0 00:24:50.687 Transport Requirements: 00:24:50.687 Secure Channel: Not Specified 00:24:50.687 Port ID: 1 (0x0001) 00:24:50.687 Controller ID: 65535 (0xffff) 00:24:50.687 Admin Max SQ Size: 32 00:24:50.687 Transport Service Identifier: 4420 00:24:50.687 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:50.687 Transport Address: 10.0.0.1 00:24:50.687 Discovery Log Entry 1 00:24:50.687 ---------------------- 00:24:50.687 Transport Type: 3 (TCP) 00:24:50.687 Address Family: 1 (IPv4) 00:24:50.687 Subsystem Type: 2 (NVM Subsystem) 00:24:50.687 Entry Flags: 00:24:50.687 Duplicate Returned Information: 0 00:24:50.687 Explicit Persistent Connection Support for Discovery: 0 00:24:50.687 Transport Requirements: 00:24:50.687 Secure Channel: Not Specified 00:24:50.687 Port ID: 1 (0x0001) 00:24:50.687 Controller ID: 65535 (0xffff) 00:24:50.687 Admin Max SQ Size: 32 00:24:50.687 Transport Service Identifier: 4420 00:24:50.687 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:50.687 Transport Address: 10.0.0.1 00:24:50.687 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:50.948 get_feature(0x01) failed 00:24:50.948 get_feature(0x02) failed 00:24:50.948 get_feature(0x04) failed 00:24:50.948 ===================================================== 00:24:50.948 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:50.948 ===================================================== 00:24:50.948 Controller Capabilities/Features 00:24:50.948 ================================ 00:24:50.948 Vendor ID: 0000 00:24:50.948 Subsystem Vendor ID: 
0000 00:24:50.948 Serial Number: e9e8413cfe6876e68ccb 00:24:50.948 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:50.948 Firmware Version: 6.8.9-20 00:24:50.948 Recommended Arb Burst: 6 00:24:50.948 IEEE OUI Identifier: 00 00 00 00:24:50.948 Multi-path I/O 00:24:50.948 May have multiple subsystem ports: Yes 00:24:50.948 May have multiple controllers: Yes 00:24:50.948 Associated with SR-IOV VF: No 00:24:50.948 Max Data Transfer Size: Unlimited 00:24:50.948 Max Number of Namespaces: 1024 00:24:50.948 Max Number of I/O Queues: 128 00:24:50.948 NVMe Specification Version (VS): 1.3 00:24:50.948 NVMe Specification Version (Identify): 1.3 00:24:50.948 Maximum Queue Entries: 1024 00:24:50.948 Contiguous Queues Required: No 00:24:50.948 Arbitration Mechanisms Supported 00:24:50.948 Weighted Round Robin: Not Supported 00:24:50.948 Vendor Specific: Not Supported 00:24:50.948 Reset Timeout: 7500 ms 00:24:50.948 Doorbell Stride: 4 bytes 00:24:50.948 NVM Subsystem Reset: Not Supported 00:24:50.948 Command Sets Supported 00:24:50.948 NVM Command Set: Supported 00:24:50.948 Boot Partition: Not Supported 00:24:50.948 Memory Page Size Minimum: 4096 bytes 00:24:50.948 Memory Page Size Maximum: 4096 bytes 00:24:50.948 Persistent Memory Region: Not Supported 00:24:50.948 Optional Asynchronous Events Supported 00:24:50.948 Namespace Attribute Notices: Supported 00:24:50.948 Firmware Activation Notices: Not Supported 00:24:50.948 ANA Change Notices: Supported 00:24:50.948 PLE Aggregate Log Change Notices: Not Supported 00:24:50.948 LBA Status Info Alert Notices: Not Supported 00:24:50.948 EGE Aggregate Log Change Notices: Not Supported 00:24:50.948 Normal NVM Subsystem Shutdown event: Not Supported 00:24:50.948 Zone Descriptor Change Notices: Not Supported 00:24:50.948 Discovery Log Change Notices: Not Supported 00:24:50.948 Controller Attributes 00:24:50.948 128-bit Host Identifier: Supported 00:24:50.948 Non-Operational Permissive Mode: Not Supported 00:24:50.948 NVM Sets: Not 
Supported 00:24:50.948 Read Recovery Levels: Not Supported 00:24:50.948 Endurance Groups: Not Supported 00:24:50.948 Predictable Latency Mode: Not Supported 00:24:50.948 Traffic Based Keep ALive: Supported 00:24:50.948 Namespace Granularity: Not Supported 00:24:50.948 SQ Associations: Not Supported 00:24:50.948 UUID List: Not Supported 00:24:50.948 Multi-Domain Subsystem: Not Supported 00:24:50.948 Fixed Capacity Management: Not Supported 00:24:50.948 Variable Capacity Management: Not Supported 00:24:50.948 Delete Endurance Group: Not Supported 00:24:50.948 Delete NVM Set: Not Supported 00:24:50.948 Extended LBA Formats Supported: Not Supported 00:24:50.948 Flexible Data Placement Supported: Not Supported 00:24:50.948 00:24:50.948 Controller Memory Buffer Support 00:24:50.948 ================================ 00:24:50.948 Supported: No 00:24:50.948 00:24:50.948 Persistent Memory Region Support 00:24:50.948 ================================ 00:24:50.948 Supported: No 00:24:50.948 00:24:50.948 Admin Command Set Attributes 00:24:50.948 ============================ 00:24:50.948 Security Send/Receive: Not Supported 00:24:50.948 Format NVM: Not Supported 00:24:50.948 Firmware Activate/Download: Not Supported 00:24:50.948 Namespace Management: Not Supported 00:24:50.948 Device Self-Test: Not Supported 00:24:50.948 Directives: Not Supported 00:24:50.948 NVMe-MI: Not Supported 00:24:50.948 Virtualization Management: Not Supported 00:24:50.948 Doorbell Buffer Config: Not Supported 00:24:50.948 Get LBA Status Capability: Not Supported 00:24:50.948 Command & Feature Lockdown Capability: Not Supported 00:24:50.948 Abort Command Limit: 4 00:24:50.948 Async Event Request Limit: 4 00:24:50.948 Number of Firmware Slots: N/A 00:24:50.948 Firmware Slot 1 Read-Only: N/A 00:24:50.948 Firmware Activation Without Reset: N/A 00:24:50.948 Multiple Update Detection Support: N/A 00:24:50.948 Firmware Update Granularity: No Information Provided 00:24:50.948 Per-Namespace SMART Log: Yes 
00:24:50.948 Asymmetric Namespace Access Log Page: Supported 00:24:50.948 ANA Transition Time : 10 sec 00:24:50.948 00:24:50.948 Asymmetric Namespace Access Capabilities 00:24:50.948 ANA Optimized State : Supported 00:24:50.948 ANA Non-Optimized State : Supported 00:24:50.948 ANA Inaccessible State : Supported 00:24:50.948 ANA Persistent Loss State : Supported 00:24:50.948 ANA Change State : Supported 00:24:50.948 ANAGRPID is not changed : No 00:24:50.948 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:50.948 00:24:50.948 ANA Group Identifier Maximum : 128 00:24:50.948 Number of ANA Group Identifiers : 128 00:24:50.948 Max Number of Allowed Namespaces : 1024 00:24:50.948 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:50.948 Command Effects Log Page: Supported 00:24:50.948 Get Log Page Extended Data: Supported 00:24:50.948 Telemetry Log Pages: Not Supported 00:24:50.948 Persistent Event Log Pages: Not Supported 00:24:50.948 Supported Log Pages Log Page: May Support 00:24:50.948 Commands Supported & Effects Log Page: Not Supported 00:24:50.949 Feature Identifiers & Effects Log Page:May Support 00:24:50.949 NVMe-MI Commands & Effects Log Page: May Support 00:24:50.949 Data Area 4 for Telemetry Log: Not Supported 00:24:50.949 Error Log Page Entries Supported: 128 00:24:50.949 Keep Alive: Supported 00:24:50.949 Keep Alive Granularity: 1000 ms 00:24:50.949 00:24:50.949 NVM Command Set Attributes 00:24:50.949 ========================== 00:24:50.949 Submission Queue Entry Size 00:24:50.949 Max: 64 00:24:50.949 Min: 64 00:24:50.949 Completion Queue Entry Size 00:24:50.949 Max: 16 00:24:50.949 Min: 16 00:24:50.949 Number of Namespaces: 1024 00:24:50.949 Compare Command: Not Supported 00:24:50.949 Write Uncorrectable Command: Not Supported 00:24:50.949 Dataset Management Command: Supported 00:24:50.949 Write Zeroes Command: Supported 00:24:50.949 Set Features Save Field: Not Supported 00:24:50.949 Reservations: Not Supported 00:24:50.949 Timestamp: Not Supported 
00:24:50.949 Copy: Not Supported 00:24:50.949 Volatile Write Cache: Present 00:24:50.949 Atomic Write Unit (Normal): 1 00:24:50.949 Atomic Write Unit (PFail): 1 00:24:50.949 Atomic Compare & Write Unit: 1 00:24:50.949 Fused Compare & Write: Not Supported 00:24:50.949 Scatter-Gather List 00:24:50.949 SGL Command Set: Supported 00:24:50.949 SGL Keyed: Not Supported 00:24:50.949 SGL Bit Bucket Descriptor: Not Supported 00:24:50.949 SGL Metadata Pointer: Not Supported 00:24:50.949 Oversized SGL: Not Supported 00:24:50.949 SGL Metadata Address: Not Supported 00:24:50.949 SGL Offset: Supported 00:24:50.949 Transport SGL Data Block: Not Supported 00:24:50.949 Replay Protected Memory Block: Not Supported 00:24:50.949 00:24:50.949 Firmware Slot Information 00:24:50.949 ========================= 00:24:50.949 Active slot: 0 00:24:50.949 00:24:50.949 Asymmetric Namespace Access 00:24:50.949 =========================== 00:24:50.949 Change Count : 0 00:24:50.949 Number of ANA Group Descriptors : 1 00:24:50.949 ANA Group Descriptor : 0 00:24:50.949 ANA Group ID : 1 00:24:50.949 Number of NSID Values : 1 00:24:50.949 Change Count : 0 00:24:50.949 ANA State : 1 00:24:50.949 Namespace Identifier : 1 00:24:50.949 00:24:50.949 Commands Supported and Effects 00:24:50.949 ============================== 00:24:50.949 Admin Commands 00:24:50.949 -------------- 00:24:50.949 Get Log Page (02h): Supported 00:24:50.949 Identify (06h): Supported 00:24:50.949 Abort (08h): Supported 00:24:50.949 Set Features (09h): Supported 00:24:50.949 Get Features (0Ah): Supported 00:24:50.949 Asynchronous Event Request (0Ch): Supported 00:24:50.949 Keep Alive (18h): Supported 00:24:50.949 I/O Commands 00:24:50.949 ------------ 00:24:50.949 Flush (00h): Supported 00:24:50.949 Write (01h): Supported LBA-Change 00:24:50.949 Read (02h): Supported 00:24:50.949 Write Zeroes (08h): Supported LBA-Change 00:24:50.949 Dataset Management (09h): Supported 00:24:50.949 00:24:50.949 Error Log 00:24:50.949 ========= 
00:24:50.949 Entry: 0 00:24:50.949 Error Count: 0x3 00:24:50.949 Submission Queue Id: 0x0 00:24:50.949 Command Id: 0x5 00:24:50.949 Phase Bit: 0 00:24:50.949 Status Code: 0x2 00:24:50.949 Status Code Type: 0x0 00:24:50.949 Do Not Retry: 1 00:24:50.949 Error Location: 0x28 00:24:50.949 LBA: 0x0 00:24:50.949 Namespace: 0x0 00:24:50.949 Vendor Log Page: 0x0 00:24:50.949 ----------- 00:24:50.949 Entry: 1 00:24:50.949 Error Count: 0x2 00:24:50.949 Submission Queue Id: 0x0 00:24:50.949 Command Id: 0x5 00:24:50.949 Phase Bit: 0 00:24:50.949 Status Code: 0x2 00:24:50.949 Status Code Type: 0x0 00:24:50.949 Do Not Retry: 1 00:24:50.949 Error Location: 0x28 00:24:50.949 LBA: 0x0 00:24:50.949 Namespace: 0x0 00:24:50.949 Vendor Log Page: 0x0 00:24:50.949 ----------- 00:24:50.949 Entry: 2 00:24:50.949 Error Count: 0x1 00:24:50.949 Submission Queue Id: 0x0 00:24:50.949 Command Id: 0x4 00:24:50.949 Phase Bit: 0 00:24:50.949 Status Code: 0x2 00:24:50.949 Status Code Type: 0x0 00:24:50.949 Do Not Retry: 1 00:24:50.949 Error Location: 0x28 00:24:50.949 LBA: 0x0 00:24:50.949 Namespace: 0x0 00:24:50.949 Vendor Log Page: 0x0 00:24:50.949 00:24:50.949 Number of Queues 00:24:50.949 ================ 00:24:50.949 Number of I/O Submission Queues: 128 00:24:50.949 Number of I/O Completion Queues: 128 00:24:50.949 00:24:50.949 ZNS Specific Controller Data 00:24:50.949 ============================ 00:24:50.949 Zone Append Size Limit: 0 00:24:50.949 00:24:50.949 00:24:50.949 Active Namespaces 00:24:50.949 ================= 00:24:50.949 get_feature(0x05) failed 00:24:50.949 Namespace ID:1 00:24:50.949 Command Set Identifier: NVM (00h) 00:24:50.949 Deallocate: Supported 00:24:50.949 Deallocated/Unwritten Error: Not Supported 00:24:50.949 Deallocated Read Value: Unknown 00:24:50.949 Deallocate in Write Zeroes: Not Supported 00:24:50.949 Deallocated Guard Field: 0xFFFF 00:24:50.949 Flush: Supported 00:24:50.949 Reservation: Not Supported 00:24:50.949 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:50.949 Size (in LBAs): 1953525168 (931GiB) 00:24:50.949 Capacity (in LBAs): 1953525168 (931GiB) 00:24:50.949 Utilization (in LBAs): 1953525168 (931GiB) 00:24:50.949 UUID: 660f006d-f0e1-43a9-8872-d36e244e31f3 00:24:50.949 Thin Provisioning: Not Supported 00:24:50.949 Per-NS Atomic Units: Yes 00:24:50.949 Atomic Boundary Size (Normal): 0 00:24:50.949 Atomic Boundary Size (PFail): 0 00:24:50.949 Atomic Boundary Offset: 0 00:24:50.949 NGUID/EUI64 Never Reused: No 00:24:50.949 ANA group ID: 1 00:24:50.949 Namespace Write Protected: No 00:24:50.949 Number of LBA Formats: 1 00:24:50.949 Current LBA Format: LBA Format #00 00:24:50.949 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:50.949 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.949 rmmod nvme_tcp 00:24:50.949 rmmod nvme_fabrics 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.949 13:07:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:53.484 13:07:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:53.484 13:07:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:56.018 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:56.018 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:24:56.954 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:56.954 00:24:56.954 real 0m16.848s 00:24:56.954 user 0m4.366s 00:24:56.954 sys 0m8.848s 00:24:56.954 13:07:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:56.954 13:07:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.954 ************************************ 00:24:56.954 END TEST nvmf_identify_kernel_target 00:24:56.954 ************************************ 00:24:56.954 13:07:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:56.954 13:07:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:56.954 13:07:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:56.954 13:07:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.213 ************************************ 00:24:57.213 START TEST nvmf_auth_host 00:24:57.213 ************************************ 00:24:57.213 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:57.213 * Looking for test storage... 
00:24:57.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.213 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:57.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.214 --rc genhtml_branch_coverage=1 00:24:57.214 --rc genhtml_function_coverage=1 00:24:57.214 --rc genhtml_legend=1 00:24:57.214 --rc geninfo_all_blocks=1 00:24:57.214 --rc geninfo_unexecuted_blocks=1 00:24:57.214 00:24:57.214 ' 00:24:57.214 13:07:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:57.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.214 --rc genhtml_branch_coverage=1 00:24:57.214 --rc genhtml_function_coverage=1 00:24:57.214 --rc genhtml_legend=1 00:24:57.214 --rc geninfo_all_blocks=1 00:24:57.214 --rc geninfo_unexecuted_blocks=1 00:24:57.214 00:24:57.214 ' 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:57.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.214 --rc genhtml_branch_coverage=1 00:24:57.214 --rc genhtml_function_coverage=1 00:24:57.214 --rc genhtml_legend=1 00:24:57.214 --rc geninfo_all_blocks=1 00:24:57.214 --rc geninfo_unexecuted_blocks=1 00:24:57.214 00:24:57.214 ' 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:57.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.214 --rc genhtml_branch_coverage=1 00:24:57.214 --rc genhtml_function_coverage=1 00:24:57.214 --rc genhtml_legend=1 00:24:57.214 --rc geninfo_all_blocks=1 00:24:57.214 --rc geninfo_unexecuted_blocks=1 00:24:57.214 00:24:57.214 ' 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.214 13:07:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.214 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.215 13:07:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.215 13:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:03.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:03.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.783 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:03.784 Found net devices under 0000:86:00.0: cvl_0_0 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:03.784 Found net devices under 0000:86:00.1: cvl_0_1 00:25:03.784 13:08:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:03.784 13:08:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:03.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:25:03.784 00:25:03.784 --- 10.0.0.2 ping statistics --- 00:25:03.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.784 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:25:03.784 00:25:03.784 --- 10.0.0.1 ping statistics --- 00:25:03.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.784 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2454264 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2454264 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2454264 ']' 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:03.784 13:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:03.784 13:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=497c87efa825f329f52d77b1527d2339 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DOO 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 497c87efa825f329f52d77b1527d2339 0 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 497c87efa825f329f52d77b1527d2339 0 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=497c87efa825f329f52d77b1527d2339 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DOO 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DOO 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.DOO 
00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=86f3d21d5cade7aa2176e32b29ab31a65f066d8fff68faf7f2efc019dcb231dd 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.04e 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 86f3d21d5cade7aa2176e32b29ab31a65f066d8fff68faf7f2efc019dcb231dd 3 00:25:03.784 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 86f3d21d5cade7aa2176e32b29ab31a65f066d8fff68faf7f2efc019dcb231dd 3 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=86f3d21d5cade7aa2176e32b29ab31a65f066d8fff68faf7f2efc019dcb231dd 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.04e 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.04e 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.04e 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9f13201ff1b2054464bc56243701e80fc510c8f2eb79a954 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5Qz 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9f13201ff1b2054464bc56243701e80fc510c8f2eb79a954 0 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9f13201ff1b2054464bc56243701e80fc510c8f2eb79a954 0 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9f13201ff1b2054464bc56243701e80fc510c8f2eb79a954 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5Qz 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5Qz 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5Qz 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b2fa17c1da921e108f816446854b13423f8e0ce26f5549db 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.roy 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b2fa17c1da921e108f816446854b13423f8e0ce26f5549db 2 00:25:03.785 13:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b2fa17c1da921e108f816446854b13423f8e0ce26f5549db 2 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b2fa17c1da921e108f816446854b13423f8e0ce26f5549db 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.roy 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.roy 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.roy 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b7f389ed922124d8eec47e81558653f2 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.thb 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b7f389ed922124d8eec47e81558653f2 1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b7f389ed922124d8eec47e81558653f2 1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b7f389ed922124d8eec47e81558653f2 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.thb 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.thb 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.thb 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=965dc7655510f1db5d8694fbe330121e 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UoS 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 965dc7655510f1db5d8694fbe330121e 1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 965dc7655510f1db5d8694fbe330121e 1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=965dc7655510f1db5d8694fbe330121e 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:03.785 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UoS 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UoS 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.UoS 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:04.044 13:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=041f04704d898c7d012fa4240b07cb12a9366c0caf02570d 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5dN 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 041f04704d898c7d012fa4240b07cb12a9366c0caf02570d 2 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 041f04704d898c7d012fa4240b07cb12a9366c0caf02570d 2 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=041f04704d898c7d012fa4240b07cb12a9366c0caf02570d 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5dN 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5dN 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5dN 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b3a40a559f22b694d9ccd1189a3f664 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xRm 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b3a40a559f22b694d9ccd1189a3f664 0 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b3a40a559f22b694d9ccd1189a3f664 0 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b3a40a559f22b694d9ccd1189a3f664 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:04.044 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xRm 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xRm 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.xRm 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0a0bc9ea80f41d4ece69fed86519a05c73c14d1a97664a6d95cdce14a70cc186 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.D0l 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0a0bc9ea80f41d4ece69fed86519a05c73c14d1a97664a6d95cdce14a70cc186 3 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0a0bc9ea80f41d4ece69fed86519a05c73c14d1a97664a6d95cdce14a70cc186 3 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0a0bc9ea80f41d4ece69fed86519a05c73c14d1a97664a6d95cdce14a70cc186 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:04.045 13:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.D0l 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.D0l 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.D0l 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2454264 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2454264 ']' 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:04.045 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DOO 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.04e ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.04e 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5Qz 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
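The loop traced next (`host/auth.sh@80`-`@82`) registers each generated key file with the target's keyring: `keys[i]` becomes keyring entry `key<i>`, and the paired controller key `ckeys[i]` becomes `ckey<i>` only when one exists (the `[[ -n ... ]]` guard). A sketch of the RPC calls that loop issues, with illustrative file paths taken from the trace:

```python
def keyring_calls(keys, ckeys):
    # Mirror host/auth.sh's registration loop: one keyring_file_add_key RPC
    # per key file, plus one per non-empty controller key. An empty ckey
    # entry (like ckeys[4] in this log) is simply skipped.
    calls = []
    for i, path in enumerate(keys):
        calls.append(("keyring_file_add_key", f"key{i}", path))
        if ckeys[i]:  # the [[ -n "${ckeys[i]}" ]] guard
            calls.append(("keyring_file_add_key", f"ckey{i}", ckeys[i]))
    return calls

calls = keyring_calls(
    ["/tmp/spdk.key-null.DOO", "/tmp/spdk.key-sha256.thb"],
    ["/tmp/spdk.key-sha512.04e", ""],
)
```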
00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.roy ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.roy 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.thb 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.UoS ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UoS 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.5dN 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xRm ]] 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xRm 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.303 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.D0l 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.304 13:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:04.304 13:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:07.594 Waiting for block devices as requested
00:25:07.594 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:07.594 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:07.594 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:07.594 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:07.594 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:07.594 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:07.594 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:07.594 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:07.594 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:07.852 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:07.852 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:07.852 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:08.110 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:08.110 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:08.110 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:08.110 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:08.369 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e
/sys/block/nvme0n1/queue/zoned ]] 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:08.936 No valid GPT data, bailing 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:08.936
00:25:08.936 Discovery Log Number of Records 2, Generation counter 2
00:25:08.936 =====Discovery Log Entry 0======
00:25:08.936 trtype: tcp
00:25:08.936 adrfam: ipv4
00:25:08.936 subtype: current discovery subsystem
00:25:08.936 treq: not specified, sq flow control disable supported
00:25:08.936 portid: 1
00:25:08.936 trsvcid: 4420
00:25:08.936 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:08.936 traddr: 10.0.0.1
00:25:08.936 eflags: none
00:25:08.936 sectype: none
00:25:08.936 =====Discovery Log Entry 1======
00:25:08.936 trtype: tcp
00:25:08.936 adrfam: ipv4
00:25:08.936 subtype: nvme subsystem
00:25:08.936 treq: not specified, sq flow control disable supported
00:25:08.936 portid: 1
00:25:08.936 trsvcid: 4420
00:25:08.936 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:08.936 traddr: 10.0.0.1
00:25:08.936 eflags: none
00:25:08.936 sectype: none
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:08.936 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.937 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.937 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.937 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.937 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.937 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.197 nvme0n1 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
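`get_main_ns_ip`, traced twice in this section, simply maps the transport under test to the environment variable naming the address to dial (`NVMF_FIRST_TARGET_IP` for rdma, `NVMF_INITIATOR_IP` for tcp, both resolving to `10.0.0.1` in this run) and echoes its value. A compact sketch of that lookup; the variable names come from the trace, while the function signature here is illustrative:

```python
import os

def get_main_ns_ip(test_transport: str) -> str:
    # Pick the environment variable that names the IP for this transport,
    # then resolve it, mirroring nvmf/common.sh's candidate table.
    ip_candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",
        "tcp": "NVMF_INITIATOR_IP",
    }
    var = ip_candidates[test_transport]
    ip = os.environ.get(var, "")
    if not ip:
        raise RuntimeError(f"{var} is not set")
    return ip
```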
00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.197 13:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.458 nvme0n1 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.458 13:08:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:09.458 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.459 
13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.459 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.718 nvme0n1 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.718 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:09.977 nvme0n1 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.977 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.236 nvme0n1 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.236 13:08:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.236 nvme0n1 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.236 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.495 
13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:10.495 
13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.495 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.496 13:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.496 13:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.496 nvme0n1 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.496 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.755 13:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.755 13:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.755 nvme0n1 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.755 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.014 13:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.014 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.015 nvme0n1 00:25:11.015 13:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.015 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:11.274 13:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.274 nvme0n1 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.274 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.536 13:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.536 13:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.536 nvme0n1 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.536 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.537 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.537 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.537 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.537 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.537 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.537 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.537 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.795 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.054 nvme0n1 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.054 
13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.054 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.314 nvme0n1 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.314 13:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.314 13:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.573 nvme0n1 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.573 13:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:12.573 
13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.573 13:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.573 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.574 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.574 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.574 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.832 nvme0n1 00:25:12.832 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.832 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.832 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.832 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.832 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.832 13:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.090 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.090 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.090 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.090 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.090 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.090 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.090 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:13.090 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.091 
13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.091 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.350 nvme0n1 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.350 13:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.350 13:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.608 nvme0n1 00:25:13.608 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.609 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.609 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.609 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.609 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.609 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.868 13:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.868 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.127 nvme0n1 00:25:14.127 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.127 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.127 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.127 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.127 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.127 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.128 13:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.128 13:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.696 nvme0n1 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.696 13:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.696 13:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.696 13:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.696 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.956 nvme0n1 00:25:14.956 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.956 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.956 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.956 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.956 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.216 13:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.216 13:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.216 13:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.476 nvme0n1 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.476 13:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.476 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.046 nvme0n1 00:25:16.046 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.046 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.046 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.046 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.046 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.305 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.305 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.305 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.305 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.305 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.305 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.305 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.305 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.306 13:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.306 13:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.306 13:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.306 13:08:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.875 nvme0n1 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.875 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.876 13:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.876 13:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.445 nvme0n1 00:25:17.445 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.445 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.446 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.015 nvme0n1 00:25:18.015 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.015 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.274 
13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.274 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.275 13:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.844 nvme0n1 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.844 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.103 nvme0n1 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.103 
13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.103 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.362 nvme0n1 
00:25:19.362 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.362 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.362 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.362 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.362 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:19.363 13:08:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.363 
13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.363 13:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.363 nvme0n1 00:25:19.363 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.363 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.363 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.363 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.363 13:08:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.363 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.622 nvme0n1 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.622 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.882 13:08:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.882 nvme0n1 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:19.882 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.883 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.142 nvme0n1 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.142 
13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.142 13:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.402 nvme0n1 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 
00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.402 13:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.402 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 nvme0n1 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.662 13:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.662 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.921 nvme0n1 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.921 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.181 nvme0n1 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.181 13:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.181 13:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.181 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.182 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.182 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.182 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.182 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.182 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.182 13:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.182 13:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.441 nvme0n1 00:25:21.441 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.441 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.441 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.441 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.441 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.441 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.708 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.708 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.708 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.708 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.708 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.709 
13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.709 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.969 nvme0n1 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.969 13:08:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:21.969 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.970 13:08:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.970 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.229 nvme0n1 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.229 13:08:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.229 13:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.489 nvme0n1 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.489 13:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.489 13:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.489 
13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.489 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.748 nvme0n1 00:25:22.748 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.748 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.748 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.748 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.748 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.748 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.007 13:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.007 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.267 nvme0n1 
00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:23.267 13:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:23.267 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.268 
13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.268 13:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.837 nvme0n1 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.837 13:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.837 13:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.837 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.096 nvme0n1 00:25:24.096 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.096 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.096 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.097 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.097 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.097 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:24.356 13:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.356 13:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.356 13:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.615 nvme0n1 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.615 13:08:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.615 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.874 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.874 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:24.875 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 nvme0n1 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.134 13:08:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.134 13:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.703 nvme0n1 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:25.703 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.704 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.963 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.532 nvme0n1 00:25:26.532 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.532 13:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.532 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.532 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:26.532 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.532 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.533 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.102 nvme0n1 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.102 13:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.671 nvme0n1 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.671 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.931 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.499 nvme0n1 00:25:28.499 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.499 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.499 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.499 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.499 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.499 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.500 13:08:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.500 13:08:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.500 nvme0n1 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.500 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.760 nvme0n1 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.760 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.019 nvme0n1 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.019 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.020 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.279 nvme0n1 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.279 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.280 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.280 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.280 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.280 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.280 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.280 13:08:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.539 nvme0n1 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.539 13:08:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.539 13:08:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.539 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.540 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.540 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.540 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.540 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.540 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.799 nvme0n1 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:29.799 13:08:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.799 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.800 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.059 nvme0n1 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.059 
13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.059 13:08:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.059 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.319 nvme0n1 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.319 13:08:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.319 13:08:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.319 13:08:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.579 nvme0n1 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:30.579 13:08:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.579 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.839 nvme0n1 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.839 
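The iterations above all follow the same pattern from `host/auth.sh`: for each DH group and key ID, configure the host's DH-HMAC-CHAP digest and group, attach the controller with the matching key, verify it appears, then detach. As a minimal standalone sketch of that loop (command names taken from the log; SPDK's `rpc_cmd` wrapper is stubbed with `echo` here so the sketch runs without a live target):

```shell
#!/usr/bin/env bash
# Sketch of the dhgroup x keyid loop driving the log above.
# rpc_cmd normally forwards to SPDK's rpc.py; stubbed here so this runs standalone.
rpc_cmd() { echo "rpc_cmd $*"; }

dhgroups=(ffdhe3072 ffdhe4096)      # groups exercised in this section of the log
keys=(k0 k1 k2 k3 k4)               # key IDs 0..4, as in the log

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # configure host-side digest and DH group for DH-HMAC-CHAP
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # attach with the key under test (controller key omitted for key ID 4, as in the log)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # verify the controller authenticated and came up, then tear it down
    rpc_cmd bdev_nvme_get_controllers
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done
```

Each iteration in the log corresponds to one pass of the inner loop, with the `nvme0n1` lines showing the namespace that appears once authentication succeeds.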
13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.839 
13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.839 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.098 nvme0n1 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.098 13:08:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.098 13:08:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.357 nvme0n1 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.357 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.634 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.898 nvme0n1 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.898 13:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.898 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.157 nvme0n1 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.157 13:08:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.157 13:08:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.417 nvme0n1 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.417 
13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.417 13:08:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.417 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.986 nvme0n1 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.986 13:08:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.986 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.245 nvme0n1 00:25:33.245 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.245 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.245 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.245 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.245 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.504 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:33.505 13:08:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:33.505 
13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.505 13:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.505 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.765 nvme0n1 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.765 13:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.765 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.334 nvme0n1 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.334 13:08:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.334 13:08:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.593 nvme0n1 00:25:34.593 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.593 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.593 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:34.593 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.593 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.593 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDk3Yzg3ZWZhODI1ZjMyOWY1MmQ3N2IxNTI3ZDIzMzll1GTT: 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: ]] 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODZmM2QyMWQ1Y2FkZTdhYTIxNzZlMzJiMjlhYjMxYTY1ZjA2NmQ4ZmZmNjhmYWY3ZjJlZmMwMTlkY2IyMzFkZIavRhw=: 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.853 13:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.853 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.422 nvme0n1 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.422 13:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:35.422 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:35.423 13:08:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.423 13:08:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.423 13:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.423 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.991 nvme0n1 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.991 13:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:35.991 13:08:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.991 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.992 13:08:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.558 nvme0n1 00:25:36.558 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.558 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.558 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.558 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.558 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.558 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDQxZjA0NzA0ZDg5OGM3ZDAxMmZhNDI0MGIwN2NiMTJhOTM2NmMwY2FmMDI1NzBk8LRPkg==: 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: ]] 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYTQwYTU1OWYyMmI2OTRkOWNjZDExODlhM2Y2NjTZYNAn: 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.817 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.818 13:08:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.818 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.386 nvme0n1 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGEwYmM5ZWE4MGY0MWQ0ZWNlNjlmZWQ4NjUxOWEwNWM3M2MxNGQxYTk3NjY0YTZkOTVjZGNlMTRhNzBjYzE4Nj4Xne8=: 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.386 
13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.386 13:08:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.954 nvme0n1 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.954 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.213 request: 00:25:38.213 { 00:25:38.213 "name": "nvme0", 00:25:38.213 "trtype": "tcp", 00:25:38.213 "traddr": "10.0.0.1", 00:25:38.213 "adrfam": "ipv4", 00:25:38.213 "trsvcid": "4420", 00:25:38.213 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:38.213 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:38.213 "prchk_reftag": false, 00:25:38.213 "prchk_guard": false, 00:25:38.213 "hdgst": false, 00:25:38.213 "ddgst": false, 00:25:38.213 "allow_unrecognized_csi": false, 00:25:38.213 "method": "bdev_nvme_attach_controller", 00:25:38.213 "req_id": 1 00:25:38.213 } 00:25:38.213 Got JSON-RPC error 
response 00:25:38.213 response: 00:25:38.213 { 00:25:38.213 "code": -5, 00:25:38.213 "message": "Input/output error" 00:25:38.213 } 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.213 request: 
00:25:38.213 { 00:25:38.213 "name": "nvme0", 00:25:38.213 "trtype": "tcp", 00:25:38.213 "traddr": "10.0.0.1", 00:25:38.213 "adrfam": "ipv4", 00:25:38.213 "trsvcid": "4420", 00:25:38.213 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:38.213 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:38.213 "prchk_reftag": false, 00:25:38.213 "prchk_guard": false, 00:25:38.213 "hdgst": false, 00:25:38.213 "ddgst": false, 00:25:38.213 "dhchap_key": "key2", 00:25:38.213 "allow_unrecognized_csi": false, 00:25:38.213 "method": "bdev_nvme_attach_controller", 00:25:38.213 "req_id": 1 00:25:38.213 } 00:25:38.213 Got JSON-RPC error response 00:25:38.213 response: 00:25:38.213 { 00:25:38.213 "code": -5, 00:25:38.213 "message": "Input/output error" 00:25:38.213 } 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.213 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.214 13:08:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.214 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.473 request: 00:25:38.473 { 00:25:38.473 "name": "nvme0", 00:25:38.473 "trtype": "tcp", 00:25:38.473 "traddr": "10.0.0.1", 00:25:38.473 "adrfam": "ipv4", 00:25:38.473 "trsvcid": "4420", 00:25:38.473 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:38.473 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:38.473 "prchk_reftag": false, 00:25:38.473 "prchk_guard": false, 00:25:38.473 "hdgst": false, 00:25:38.473 "ddgst": false, 00:25:38.473 "dhchap_key": "key1", 00:25:38.473 "dhchap_ctrlr_key": "ckey2", 00:25:38.473 "allow_unrecognized_csi": false, 00:25:38.473 "method": "bdev_nvme_attach_controller", 00:25:38.473 "req_id": 1 00:25:38.473 } 00:25:38.473 Got JSON-RPC error response 00:25:38.473 response: 00:25:38.473 { 00:25:38.473 "code": -5, 00:25:38.473 "message": "Input/output error" 00:25:38.473 } 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.473 13:08:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.473 nvme0n1 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:38.473 13:08:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:38.473 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.474 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.474 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.474 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.474 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.474 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:38.474 
13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.474 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.474 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.732 request: 00:25:38.732 { 00:25:38.732 "name": "nvme0", 00:25:38.732 "dhchap_key": "key1", 00:25:38.732 "dhchap_ctrlr_key": "ckey2", 00:25:38.732 "method": "bdev_nvme_set_keys", 00:25:38.732 "req_id": 1 00:25:38.732 } 00:25:38.732 Got JSON-RPC error response 00:25:38.732 response: 
00:25:38.732 { 00:25:38.732 "code": -13, 00:25:38.732 "message": "Permission denied" 00:25:38.732 } 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:38.732 13:08:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:39.670 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.670 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:39.670 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.670 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.670 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.670 13:08:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:39.670 13:08:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWYxMzIwMWZmMWIyMDU0NDY0YmM1NjI0MzcwMWU4MGZjNTEwYzhmMmViNzlhOTU0xx6ojQ==: 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: ]] 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjJmYTE3YzFkYTkyMWUxMDhmODE2NDQ2ODU0YjEzNDIzZjhlMGNlMjZmNTU0OWRieTMqpg==: 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.048 nvme0n1 00:25:41.048 13:08:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjdmMzg5ZWQ5MjIxMjRkOGVlYzQ3ZTgxNTU4NjUzZjLLb1Rw: 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: ]] 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY1ZGM3NjU1NTEwZjFkYjVkODY5NGZiZTMzMDEyMWUHE2vh: 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:41.048 13:08:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.048 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.048 request: 00:25:41.048 { 00:25:41.049 "name": "nvme0", 00:25:41.049 "dhchap_key": "key2", 00:25:41.049 "dhchap_ctrlr_key": "ckey1", 00:25:41.049 "method": "bdev_nvme_set_keys", 00:25:41.049 "req_id": 1 00:25:41.049 } 00:25:41.049 Got JSON-RPC error response 00:25:41.049 response: 00:25:41.049 { 00:25:41.049 "code": -13, 00:25:41.049 "message": "Permission denied" 00:25:41.049 } 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.049 13:08:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:41.049 13:08:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:41.984 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.984 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:41.984 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.984 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.242 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.242 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:42.242 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:42.242 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.243 rmmod nvme_tcp 
00:25:42.243 rmmod nvme_fabrics 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2454264 ']' 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2454264 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 2454264 ']' 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 2454264 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2454264 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2454264' 00:25:42.243 killing process with pid 2454264 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 2454264 00:25:42.243 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 2454264 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.502 13:08:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.409 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:44.409 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.410 13:08:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:44.410 13:08:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:47.700 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.700 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.700 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.700 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.700 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.700 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.700 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.700 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:47.700 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.700 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.701 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.701 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.701 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.701 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.701 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.701 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:48.270 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:48.529 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.DOO /tmp/spdk.key-null.5Qz /tmp/spdk.key-sha256.thb /tmp/spdk.key-sha384.5dN 
/tmp/spdk.key-sha512.D0l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:48.529 13:08:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:51.067 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:51.067 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:51.067 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:51.067 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:51.067 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:51.067 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:51.067 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:51.067 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:51.067 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:51.327 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:51.327 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:51.327 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:51.327 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:51.327 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:51.327 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:51.327 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:51.327 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:51.327 00:25:51.327 real 0m54.252s 00:25:51.327 user 0m48.900s 00:25:51.327 sys 0m12.722s 00:25:51.327 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:51.327 13:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.327 ************************************ 00:25:51.327 END TEST nvmf_auth_host 00:25:51.327 ************************************ 00:25:51.327 13:08:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:25:51.327 13:08:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:51.327 13:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:51.327 13:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:51.327 13:08:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.327 ************************************ 00:25:51.327 START TEST nvmf_digest 00:25:51.327 ************************************ 00:25:51.327 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:51.589 * Looking for test storage... 00:25:51.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:51.589 13:08:49 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:51.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.589 --rc genhtml_branch_coverage=1 00:25:51.589 --rc genhtml_function_coverage=1 00:25:51.589 --rc genhtml_legend=1 00:25:51.589 --rc geninfo_all_blocks=1 00:25:51.589 --rc geninfo_unexecuted_blocks=1 00:25:51.589 00:25:51.589 ' 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:51.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.589 --rc genhtml_branch_coverage=1 00:25:51.589 --rc genhtml_function_coverage=1 00:25:51.589 --rc genhtml_legend=1 00:25:51.589 --rc geninfo_all_blocks=1 00:25:51.589 --rc geninfo_unexecuted_blocks=1 00:25:51.589 00:25:51.589 ' 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:51.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.589 --rc genhtml_branch_coverage=1 00:25:51.589 --rc genhtml_function_coverage=1 00:25:51.589 --rc genhtml_legend=1 00:25:51.589 --rc geninfo_all_blocks=1 00:25:51.589 --rc geninfo_unexecuted_blocks=1 00:25:51.589 00:25:51.589 ' 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:51.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.589 --rc genhtml_branch_coverage=1 00:25:51.589 --rc genhtml_function_coverage=1 00:25:51.589 --rc genhtml_legend=1 00:25:51.589 --rc geninfo_all_blocks=1 00:25:51.589 --rc geninfo_unexecuted_blocks=1 00:25:51.589 00:25:51.589 ' 00:25:51.589 13:08:49 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.589 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.590 
13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:51.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:51.590 13:08:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:51.590 13:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.106 13:08:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:57.106 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:57.106 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:57.106 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:57.107 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:57.107 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.107 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.107 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.107 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.107 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:57.366 Found net devices under 0000:86:00.0: cvl_0_0 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:57.366 Found net devices under 0000:86:00.1: cvl_0_1 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.366 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.367 13:08:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.367 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.367 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.367 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:57.367 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:57.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:25:57.367 00:25:57.367 --- 10.0.0.2 ping statistics --- 00:25:57.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.367 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:25:57.367 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:57.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:25:57.367 00:25:57.367 --- 10.0.0.1 ping statistics --- 00:25:57.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.367 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:57.627 ************************************ 00:25:57.627 START TEST nvmf_digest_clean 00:25:57.627 ************************************ 00:25:57.627 
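The namespace plumbing traced above (addr flush, `ip netns add`, moving the target NIC into the namespace, the iptables ACCEPT for port 4420, and the two reachability pings) can be condensed into one sketch. The `setup_nvmf_netns` wrapper and the `RUN` dry-run switch are illustrative additions, not part of `nvmf/common.sh`; the interface names, addresses, and flags are copied from the trace, and actually executing it requires root.

```shell
# Dry-run sketch of the nvmf_tcp_init steps seen in the log.
# RUN=echo (the default here) prints the commands instead of running them.
RUN="${RUN:-echo}"

setup_nvmf_netns() {
  local tgt_if=$1 ini_if=$2 ns=$3
  $RUN ip -4 addr flush "$tgt_if"
  $RUN ip -4 addr flush "$ini_if"
  $RUN ip netns add "$ns"
  $RUN ip link set "$tgt_if" netns "$ns"
  # 10.0.0.1 on the initiator side, 10.0.0.2 on the target side (inside the ns)
  $RUN ip addr add 10.0.0.1/24 dev "$ini_if"
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  $RUN ip link set "$ini_if" up
  $RUN ip netns exec "$ns" ip link set "$tgt_if" up
  $RUN ip netns exec "$ns" ip link set lo up
  # Open the NVMe/TCP listen port on the initiator-facing interface
  $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions before starting the target
  $RUN ping -c 1 10.0.0.2
  $RUN ip netns exec "$ns" ping -c 1 10.0.0.1
}

setup_nvmf_netns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Run with `RUN=` (empty) as root to execute for real; the default prints the same sequence the trace shows.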
13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2468027 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2468027 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2468027 ']' 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:57.627 13:08:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:57.627 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:57.627 [2024-11-18 13:08:55.195157] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:25:57.627 [2024-11-18 13:08:55.195205] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.627 [2024-11-18 13:08:55.275432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.627 [2024-11-18 13:08:55.316939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.627 [2024-11-18 13:08:55.316976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.627 [2024-11-18 13:08:55.316983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.627 [2024-11-18 13:08:55.316989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.627 [2024-11-18 13:08:55.316995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:57.627 [2024-11-18 13:08:55.317588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:57.886 null0 00:25:57.886 [2024-11-18 13:08:55.469588] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.886 [2024-11-18 13:08:55.493785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2468060 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2468060 /var/tmp/bperf.sock 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2468060 ']' 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:57.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:57.886 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:57.886 [2024-11-18 13:08:55.545515] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:25:57.886 [2024-11-18 13:08:55.545555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468060 ] 00:25:58.145 [2024-11-18 13:08:55.621221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.145 [2024-11-18 13:08:55.664002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.145 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:58.145 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:58.145 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:58.145 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:58.145 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:58.403 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.403 13:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:58.662 nvme0n1
00:25:58.662 13:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:25:58.662 13:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:58.662 Running I/O for 2 seconds...
00:26:00.978 24805.00 IOPS, 96.89 MiB/s [2024-11-18T12:08:58.680Z] 25108.50 IOPS, 98.08 MiB/s
00:26:00.978 Latency(us)
00:26:00.978 [2024-11-18T12:08:58.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:00.978 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:00.978 nvme0n1 : 2.04 24620.10 96.17 0.00 0.00 5090.99 2293.76 46957.97
00:26:00.978 [2024-11-18T12:08:58.680Z] ===================================================================================================================
00:26:00.978 [2024-11-18T12:08:58.680Z] Total : 24620.10 96.17 0.00 0.00 5090.99 2293.76 46957.97
00:26:00.978 {
00:26:00.978   "results": [
00:26:00.978     {
00:26:00.978       "job": "nvme0n1",
00:26:00.978       "core_mask": "0x2",
00:26:00.978       "workload": "randread",
00:26:00.978       "status": "finished",
00:26:00.978       "queue_depth": 128,
00:26:00.978       "io_size": 4096,
00:26:00.978       "runtime": 2.044874,
00:26:00.978       "iops": 24620.098842275856,
00:26:00.978       "mibps": 96.17226110264006,
00:26:00.978       "io_failed": 0,
00:26:00.978       "io_timeout": 0,
00:26:00.978       "avg_latency_us": 5090.9908038706835,
00:26:00.978       "min_latency_us": 2293.76,
00:26:00.978       "max_latency_us": 46957.96869565218
00:26:00.978     }
00:26:00.978   ],
00:26:00.978   "core_count": 1
00:26:00.978 }
00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
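The bdevperf summary can be cross-checked: the reported MiB/s is just iops * io_size / 2^20. A quick awk check using the values from the first run's JSON results (the variable names `iops`/`io_size`/`mibps` are illustrative, not from the harness):

```shell
# Values copied from the first "results" block above (randread, 4 KiB, qd 128).
iops=24620.098842275856
io_size=4096
# MiB/s = IOPS * IO size in bytes / 2^20
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mibps MiB/s"   # prints 96.17 MiB/s, matching the reported mibps
```

The same identity holds for the 128 KiB run later in the trace (5805.06 * 131072 / 2^20 = 725.63 MiB/s).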
00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:00.978 | select(.opcode=="crc32c") 00:26:00.978 | "\(.module_name) \(.executed)"' 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2468060 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2468060 ']' 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2468060 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2468060 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- 
# '[' reactor_1 = sudo ']' 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2468060' 00:26:00.978 killing process with pid 2468060 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2468060 00:26:00.978 Received shutdown signal, test time was about 2.000000 seconds 00:26:00.978 00:26:00.978 Latency(us) 00:26:00.978 [2024-11-18T12:08:58.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.978 [2024-11-18T12:08:58.680Z] =================================================================================================================== 00:26:00.978 [2024-11-18T12:08:58.680Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.978 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2468060 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2468531 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2468531 
/var/tmp/bperf.sock 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2468531 ']' 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:01.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:01.237 13:08:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:01.237 [2024-11-18 13:08:58.873085] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:26:01.237 [2024-11-18 13:08:58.873134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468531 ] 00:26:01.237 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:01.237 Zero copy mechanism will not be used. 
00:26:01.496 [2024-11-18 13:08:58.947114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.496 [2024-11-18 13:08:58.990365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.496 13:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:01.496 13:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:01.496 13:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:01.496 13:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:01.496 13:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:01.754 13:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.754 13:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.012 nvme0n1 00:26:02.270 13:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:02.270 13:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:02.270 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.270 Zero copy mechanism will not be used. 00:26:02.270 Running I/O for 2 seconds... 
00:26:04.144 5849.00 IOPS, 731.12 MiB/s [2024-11-18T12:09:01.846Z] 5806.50 IOPS, 725.81 MiB/s
00:26:04.144 Latency(us)
00:26:04.144 [2024-11-18T12:09:01.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:04.144 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:04.144 nvme0n1 : 2.00 5805.06 725.63 0.00 0.00 2753.51 641.11 6724.56
00:26:04.144 [2024-11-18T12:09:01.846Z] ===================================================================================================================
00:26:04.144 [2024-11-18T12:09:01.846Z] Total : 5805.06 725.63 0.00 0.00 2753.51 641.11 6724.56
00:26:04.144 {
00:26:04.144   "results": [
00:26:04.144     {
00:26:04.144       "job": "nvme0n1",
00:26:04.144       "core_mask": "0x2",
00:26:04.144       "workload": "randread",
00:26:04.144       "status": "finished",
00:26:04.144       "queue_depth": 16,
00:26:04.144       "io_size": 131072,
00:26:04.144       "runtime": 2.003252,
00:26:04.144       "iops": 5805.060970861379,
00:26:04.144       "mibps": 725.6326213576724,
00:26:04.144       "io_failed": 0,
00:26:04.144       "io_timeout": 0,
00:26:04.144       "avg_latency_us": 2753.5117923332596,
00:26:04.144       "min_latency_us": 641.1130434782609,
00:26:04.144       "max_latency_us": 6724.5634782608695
00:26:04.145     }
00:26:04.145   ],
00:26:04.145   "core_count": 1
00:26:04.145 }
00:26:04.145 13:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:04.145 13:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:04.145 13:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:04.145 13:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:04.145 | select(.opcode=="crc32c")
00:26:04.145 | "\(.module_name) \(.executed)"'
00:26:04.145 13:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2468531 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2468531 ']' 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2468531 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2468531 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2468531' 00:26:04.404 killing process with pid 2468531 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2468531 00:26:04.404 Received shutdown signal, test time was about 2.000000 seconds 
00:26:04.404 00:26:04.404 Latency(us) 00:26:04.404 [2024-11-18T12:09:02.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.404 [2024-11-18T12:09:02.106Z] =================================================================================================================== 00:26:04.404 [2024-11-18T12:09:02.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.404 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2468531 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2469304 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2469304 /var/tmp/bperf.sock 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2469304 ']' 00:26:04.663 13:09:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:04.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:04.663 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:04.663 [2024-11-18 13:09:02.257057] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:26:04.663 [2024-11-18 13:09:02.257105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469304 ] 00:26:04.663 [2024-11-18 13:09:02.315048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.922 [2024-11-18 13:09:02.362524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.922 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:04.922 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:04.922 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:04.922 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:04.922 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:05.181 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.181 13:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.441 nvme0n1 00:26:05.441 13:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:05.441 13:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:05.701 Running I/O for 2 seconds... 
00:26:07.576 26518.00 IOPS, 103.59 MiB/s [2024-11-18T12:09:05.278Z] 26523.00 IOPS, 103.61 MiB/s 00:26:07.576 Latency(us) 00:26:07.576 [2024-11-18T12:09:05.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.576 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:07.576 nvme0n1 : 2.00 26523.04 103.61 0.00 0.00 4817.56 3618.73 11511.54 00:26:07.576 [2024-11-18T12:09:05.278Z] =================================================================================================================== 00:26:07.576 [2024-11-18T12:09:05.278Z] Total : 26523.04 103.61 0.00 0.00 4817.56 3618.73 11511.54 00:26:07.576 { 00:26:07.576 "results": [ 00:26:07.576 { 00:26:07.576 "job": "nvme0n1", 00:26:07.576 "core_mask": "0x2", 00:26:07.576 "workload": "randwrite", 00:26:07.576 "status": "finished", 00:26:07.576 "queue_depth": 128, 00:26:07.576 "io_size": 4096, 00:26:07.576 "runtime": 2.004521, 00:26:07.576 "iops": 26523.044657551603, 00:26:07.576 "mibps": 103.60564319356095, 00:26:07.576 "io_failed": 0, 00:26:07.576 "io_timeout": 0, 00:26:07.576 "avg_latency_us": 4817.563623433741, 00:26:07.576 "min_latency_us": 3618.7269565217393, 00:26:07.576 "max_latency_us": 11511.540869565217 00:26:07.576 } 00:26:07.576 ], 00:26:07.576 "core_count": 1 00:26:07.576 } 00:26:07.576 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:07.576 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:07.576 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:07.576 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:07.576 | select(.opcode=="crc32c") 00:26:07.576 | "\(.module_name) \(.executed)"' 00:26:07.576 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:07.835 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:07.835 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:07.835 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:07.835 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:07.835 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2469304 00:26:07.835 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2469304 ']' 00:26:07.835 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2469304 00:26:07.835 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:07.835 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:07.836 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2469304 00:26:07.836 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:07.836 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:07.836 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2469304' 00:26:07.836 killing process with pid 2469304 00:26:07.836 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2469304 00:26:07.836 Received shutdown signal, test time was about 2.000000 seconds 
00:26:07.836 00:26:07.836 Latency(us) 00:26:07.836 [2024-11-18T12:09:05.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.836 [2024-11-18T12:09:05.538Z] =================================================================================================================== 00:26:07.836 [2024-11-18T12:09:05.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.836 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2469304 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2469980 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2469980 /var/tmp/bperf.sock 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2469980 ']' 00:26:08.095 13:09:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:08.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:08.095 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.095 [2024-11-18 13:09:05.671192] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:26:08.095 [2024-11-18 13:09:05.671238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469980 ] 00:26:08.095 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:08.095 Zero copy mechanism will not be used. 
00:26:08.095 [2024-11-18 13:09:05.729450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.095 [2024-11-18 13:09:05.774063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.355 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:08.355 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:08.355 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:08.355 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:08.355 13:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:08.614 13:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.614 13:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.872 nvme0n1 00:26:08.872 13:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:08.872 13:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:08.872 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:08.872 Zero copy mechanism will not be used. 00:26:08.872 Running I/O for 2 seconds... 
00:26:11.189 5582.00 IOPS, 697.75 MiB/s [2024-11-18T12:09:08.891Z] 5750.50 IOPS, 718.81 MiB/s 00:26:11.189 Latency(us) 00:26:11.189 [2024-11-18T12:09:08.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.189 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:11.189 nvme0n1 : 2.00 5749.74 718.72 0.00 0.00 2778.59 1966.08 12765.27 00:26:11.189 [2024-11-18T12:09:08.891Z] =================================================================================================================== 00:26:11.189 [2024-11-18T12:09:08.891Z] Total : 5749.74 718.72 0.00 0.00 2778.59 1966.08 12765.27 00:26:11.189 { 00:26:11.189 "results": [ 00:26:11.189 { 00:26:11.189 "job": "nvme0n1", 00:26:11.189 "core_mask": "0x2", 00:26:11.189 "workload": "randwrite", 00:26:11.189 "status": "finished", 00:26:11.189 "queue_depth": 16, 00:26:11.189 "io_size": 131072, 00:26:11.189 "runtime": 2.003569, 00:26:11.189 "iops": 5749.739589702176, 00:26:11.189 "mibps": 718.717448712772, 00:26:11.189 "io_failed": 0, 00:26:11.189 "io_timeout": 0, 00:26:11.189 "avg_latency_us": 2778.592463768116, 00:26:11.189 "min_latency_us": 1966.08, 00:26:11.189 "max_latency_us": 12765.27304347826 00:26:11.189 } 00:26:11.189 ], 00:26:11.189 "core_count": 1 00:26:11.189 } 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:11.189 | select(.opcode=="crc32c") 00:26:11.189 | "\(.module_name) \(.executed)"' 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2469980 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2469980 ']' 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2469980 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:11.189 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:11.190 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2469980 00:26:11.190 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:11.190 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:11.190 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2469980' 00:26:11.190 killing process with pid 2469980 00:26:11.190 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2469980 00:26:11.190 Received shutdown signal, test time was about 2.000000 seconds 
00:26:11.190 00:26:11.190 Latency(us) 00:26:11.190 [2024-11-18T12:09:08.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.190 [2024-11-18T12:09:08.892Z] =================================================================================================================== 00:26:11.190 [2024-11-18T12:09:08.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.190 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2469980 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2468027 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2468027 ']' 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2468027 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2468027 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2468027' 00:26:11.448 killing process with pid 2468027 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2468027 00:26:11.448 13:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2468027 00:26:11.448 00:26:11.448 
real 0m13.971s 00:26:11.448 user 0m26.821s 00:26:11.448 sys 0m4.458s 00:26:11.448 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:11.448 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.448 ************************************ 00:26:11.448 END TEST nvmf_digest_clean 00:26:11.448 ************************************ 00:26:11.448 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:11.448 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:11.448 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:11.448 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.707 ************************************ 00:26:11.707 START TEST nvmf_digest_error 00:26:11.707 ************************************ 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2470741 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2470741 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2470741 ']' 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.707 [2024-11-18 13:09:09.236704] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:26:11.707 [2024-11-18 13:09:09.236748] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.707 [2024-11-18 13:09:09.316786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.707 [2024-11-18 13:09:09.357456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.707 [2024-11-18 13:09:09.357491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:11.707 [2024-11-18 13:09:09.357498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.707 [2024-11-18 13:09:09.357504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.707 [2024-11-18 13:09:09.357509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:11.707 [2024-11-18 13:09:09.358109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:11.707 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.965 [2024-11-18 13:09:09.426553] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.965 13:09:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.965 null0 00:26:11.965 [2024-11-18 13:09:09.517910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.965 [2024-11-18 13:09:09.542117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2470945 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2470945 /var/tmp/bperf.sock 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2470945 ']' 
00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:11.965 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.965 [2024-11-18 13:09:09.596418] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:26:11.965 [2024-11-18 13:09:09.596458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470945 ] 00:26:12.225 [2024-11-18 13:09:09.670511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.225 [2024-11-18 13:09:09.711530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.225 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:12.225 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:12.225 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:12.225 13:09:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:12.484 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:12.485 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.485 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:12.485 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.485 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.485 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.744 nvme0n1 00:26:12.744 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:12.744 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.744 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.004 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.004 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:13.004 13:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:13.004 Running I/O for 2 seconds... 00:26:13.004 [2024-11-18 13:09:10.553891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.553925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.553936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.564702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.564727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.564737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.576657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.576680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.576688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.584865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.584887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24311 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.584895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.595357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.595379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.595388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.606756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.606779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.606788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.615946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.615967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.615974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.625210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.625235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.625243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.635621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.635643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.635651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.645943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.645964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.645973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.657164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.657184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.657193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.665664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.665684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.665693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.674594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.674613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.674621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.685024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.685044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.685052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.004 [2024-11-18 13:09:10.694420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.004 [2024-11-18 13:09:10.694450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.004 [2024-11-18 13:09:10.694459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.702842] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.702863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.702872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.713500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.713520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.713528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.723235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.723255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.723263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.731626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.731646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.731654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.741312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.741332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.741340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.750812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.750832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.750840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.760177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.760198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.760206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.769182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.769203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.769211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.779565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.779586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.779594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.788910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.788930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.788942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.798658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.798686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.798694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.808115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.808135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.808143] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.817704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.817725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.817733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.827537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.827558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.827566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.837302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.264 [2024-11-18 13:09:10.837323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.264 [2024-11-18 13:09:10.837331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.264 [2024-11-18 13:09:10.845848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.845868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15526 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.845875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.857129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.857148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.857156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.866325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.866344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.866359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.876150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.876175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.876183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.887133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.887153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:7000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.887162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.897888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.897908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.897916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.906390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.906410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.906418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.918949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.918969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.918977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.928438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.928458] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.928466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.938165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.938184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.938192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.948821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.948841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.948850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.265 [2024-11-18 13:09:10.959215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.265 [2024-11-18 13:09:10.959235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.265 [2024-11-18 13:09:10.959243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.525 [2024-11-18 13:09:10.967991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf2b370) 00:26:13.525 [2024-11-18 13:09:10.968012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.525 [2024-11-18 13:09:10.968021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.525 [2024-11-18 13:09:10.980549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.525 [2024-11-18 13:09:10.980570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.525 [2024-11-18 13:09:10.980578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.525 [2024-11-18 13:09:10.991010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.525 [2024-11-18 13:09:10.991029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.525 [2024-11-18 13:09:10.991037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.525 [2024-11-18 13:09:10.999622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.525 [2024-11-18 13:09:10.999641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.525 [2024-11-18 13:09:10.999650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.525 [2024-11-18 13:09:11.011815] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.011836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.011844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.024243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.024263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.024271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.034922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.034942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.034950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.043281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.043300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.043308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.053320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.053339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.053350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.063947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.063967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.063975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.073238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.073258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.073266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.082699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.082719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.082727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.092031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.092051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.092059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.101368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.101387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.101395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.111243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.111263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.111271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.122305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.122325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.122333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.135865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.135885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.135894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.144302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.144322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.144330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.156452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.156472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.156480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.167852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.167872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18930 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.167879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.176191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.176211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.176219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.188029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.188049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.188057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.198260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.198280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.198288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.206964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.206985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:48 nsid:1 lba:6323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.206993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.526 [2024-11-18 13:09:11.219993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.526 [2024-11-18 13:09:11.220013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.526 [2024-11-18 13:09:11.220021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.786 [2024-11-18 13:09:11.232625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.786 [2024-11-18 13:09:11.232646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.786 [2024-11-18 13:09:11.232658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.786 [2024-11-18 13:09:11.244534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.786 [2024-11-18 13:09:11.244555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.786 [2024-11-18 13:09:11.244563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.786 [2024-11-18 13:09:11.256755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.786 [2024-11-18 13:09:11.256775] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.786 [2024-11-18 13:09:11.256783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.786 [2024-11-18 13:09:11.266713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.786 [2024-11-18 13:09:11.266733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.786 [2024-11-18 13:09:11.266741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.275933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.275953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.275961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.287147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.287167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.287175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.295580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.295600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.295608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.305612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.305632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.305640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.315904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.315925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.315933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.328843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.328869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.328878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.337022] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.337042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.337051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.347838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.347859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.347867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.357347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.357374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.357382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.368081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.368102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.368110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.376876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.376897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.376905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.385751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.385772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.385781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.395292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.395314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.395322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.406091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.406111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.406119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.414431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.414451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.414459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.426181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.426202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.426212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.439107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.439129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.439137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.450413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.450439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.450448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.459303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.459324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.459333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.471361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.471381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.471390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.787 [2024-11-18 13:09:11.480897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:13.787 [2024-11-18 13:09:11.480918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.787 [2024-11-18 13:09:11.480926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.492788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.492810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14906 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.492818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.505231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.505253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.505265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.516999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.517021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.517029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.525958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.525979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.525988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.538531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.538553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:25423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.538561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 24822.00 IOPS, 96.96 MiB/s [2024-11-18T12:09:11.750Z] [2024-11-18 13:09:11.551368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.551391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.551400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.562726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.562747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.562756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.571135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.571156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.571164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.583737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.583760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.583768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.593004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.593025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.593033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.604304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.604329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.604338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.615397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.615418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.615427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.624207] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.624227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.624235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.634987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.635009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.635017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.646516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.646537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.646545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.654945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.654966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.654975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.667740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.667762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.667770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.680223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.680244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.680253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.692531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.692552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.692560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.704312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.704333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.704341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.713262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.713283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.713291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.726276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.726297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.048 [2024-11-18 13:09:11.726306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.048 [2024-11-18 13:09:11.734571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.048 [2024-11-18 13:09:11.734600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.049 [2024-11-18 13:09:11.734609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.746499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.746521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.746529] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.758634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.758655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.758664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.767022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.767043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.767052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.779102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.779122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.779130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.789507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.789528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16125 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.789540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.799328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.799349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.799364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.809054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.809075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.809083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.818503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.818523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.818532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.828011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.828032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:87 nsid:1 lba:9334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.828040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.836521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.836542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.836550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.848097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.848119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.848127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.858173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.858196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.858204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.867186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.867209] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.867218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.879393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.879414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.879423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.890215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.890236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.890244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.899095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.899116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.899124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.909715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.909736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.909744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.920714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.920735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.920743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.929175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.929195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.929203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.940047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.309 [2024-11-18 13:09:11.940067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.309 [2024-11-18 13:09:11.940076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.309 [2024-11-18 13:09:11.949432] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.310 [2024-11-18 13:09:11.949453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.310 [2024-11-18 13:09:11.949462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.310 [2024-11-18 13:09:11.959350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.310 [2024-11-18 13:09:11.959376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.310 [2024-11-18 13:09:11.959387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.310 [2024-11-18 13:09:11.969671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.310 [2024-11-18 13:09:11.969693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.310 [2024-11-18 13:09:11.969701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.310 [2024-11-18 13:09:11.979675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.310 [2024-11-18 13:09:11.979696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.310 [2024-11-18 13:09:11.979704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:14.310 [2024-11-18 13:09:11.988563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.310 [2024-11-18 13:09:11.988584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.310 [2024-11-18 13:09:11.988592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.310 [2024-11-18 13:09:11.999062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.310 [2024-11-18 13:09:11.999087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.310 [2024-11-18 13:09:11.999097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.011066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.011089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.570 [2024-11-18 13:09:12.011097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.019837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.019860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.570 [2024-11-18 13:09:12.019869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.031972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.031995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.570 [2024-11-18 13:09:12.032003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.043743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.043764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.570 [2024-11-18 13:09:12.043772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.051977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.052004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.570 [2024-11-18 13:09:12.052013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.061308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.061330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.570 [2024-11-18 13:09:12.061338] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.070695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.070717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.570 [2024-11-18 13:09:12.070725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.080714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.080736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.570 [2024-11-18 13:09:12.080745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.089838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.089859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.570 [2024-11-18 13:09:12.089867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.570 [2024-11-18 13:09:12.099844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.570 [2024-11-18 13:09:12.099865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20312 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.099873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.108910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.108930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.108939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.119699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.119719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.119728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.131228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.131249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.131258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.140024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.140046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:86 nsid:1 lba:25107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.140054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.150679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.150701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.150710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.161986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.162008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.162017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.172335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.172364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.172373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.180579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.180600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.180608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.191139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.191161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.191170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.199293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.199315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.199323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.210202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.210223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.210232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.222659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.222683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.222695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.232515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.232536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.232545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.242680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.242701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.242709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.254434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.254459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.254468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.571 [2024-11-18 13:09:12.262369] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.571 [2024-11-18 13:09:12.262391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.571 [2024-11-18 13:09:12.262399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.831 [2024-11-18 13:09:12.272006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.831 [2024-11-18 13:09:12.272027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.831 [2024-11-18 13:09:12.272035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.831 [2024-11-18 13:09:12.283796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.831 [2024-11-18 13:09:12.283818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.831 [2024-11-18 13:09:12.283826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.831 [2024-11-18 13:09:12.295628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.831 [2024-11-18 13:09:12.295650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.831 [2024-11-18 13:09:12.295659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:14.831 [2024-11-18 13:09:12.307798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.831 [2024-11-18 13:09:12.307819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.831 [2024-11-18 13:09:12.307828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.831 [2024-11-18 13:09:12.316954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.831 [2024-11-18 13:09:12.316983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.831 [2024-11-18 13:09:12.316992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.831 [2024-11-18 13:09:12.330116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.831 [2024-11-18 13:09:12.330140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.831 [2024-11-18 13:09:12.330149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.831 [2024-11-18 13:09:12.339389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.339410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.339419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.350598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.350618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.350627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.362547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.362569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.362578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.371271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.371291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.371300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.384137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.384159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.384167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.396057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.396080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.396088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.405422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.405445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.405453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.418809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.418831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.418839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.431237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.431259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:14.832 [2024-11-18 13:09:12.431267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.443762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.443784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.443793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.452229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.452250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.452259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.465165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.465186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.465195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.475518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.475539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:8021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.475548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.487133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.487155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.487163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.498772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.498793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.498802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.507366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.507387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.507400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.517667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.517688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.517696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.832 [2024-11-18 13:09:12.527185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:14.832 [2024-11-18 13:09:12.527206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.832 [2024-11-18 13:09:12.527215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.092 [2024-11-18 13:09:12.536805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:15.092 [2024-11-18 13:09:12.536827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.092 [2024-11-18 13:09:12.536835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.092 24582.00 IOPS, 96.02 MiB/s [2024-11-18T12:09:12.794Z] [2024-11-18 13:09:12.547089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf2b370) 00:26:15.092 [2024-11-18 13:09:12.547110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.092 [2024-11-18 13:09:12.547118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.092 00:26:15.092 Latency(us) 00:26:15.092 [2024-11-18T12:09:12.794Z] Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.092 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:15.092 nvme0n1 : 2.00 24598.17 96.09 0.00 0.00 5198.21 2251.02 18122.13 00:26:15.092 [2024-11-18T12:09:12.794Z] =================================================================================================================== 00:26:15.092 [2024-11-18T12:09:12.794Z] Total : 24598.17 96.09 0.00 0.00 5198.21 2251.02 18122.13 00:26:15.092 { 00:26:15.092 "results": [ 00:26:15.092 { 00:26:15.092 "job": "nvme0n1", 00:26:15.092 "core_mask": "0x2", 00:26:15.092 "workload": "randread", 00:26:15.092 "status": "finished", 00:26:15.092 "queue_depth": 128, 00:26:15.092 "io_size": 4096, 00:26:15.092 "runtime": 2.004377, 00:26:15.092 "iops": 24598.166911713713, 00:26:15.092 "mibps": 96.0865894988817, 00:26:15.092 "io_failed": 0, 00:26:15.092 "io_timeout": 0, 00:26:15.092 "avg_latency_us": 5198.208000903005, 00:26:15.092 "min_latency_us": 2251.0191304347827, 00:26:15.092 "max_latency_us": 18122.128695652173 00:26:15.092 } 00:26:15.092 ], 00:26:15.092 "core_count": 1 00:26:15.092 } 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:15.092 | .driver_specific 00:26:15.092 | .nvme_error 00:26:15.092 | .status_code 00:26:15.092 | .command_transient_transport_error' 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:26:15.092 13:09:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2470945 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2470945 ']' 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2470945 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:15.092 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2470945 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2470945' 00:26:15.352 killing process with pid 2470945 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2470945 00:26:15.352 Received shutdown signal, test time was about 2.000000 seconds 00:26:15.352 00:26:15.352 Latency(us) 00:26:15.352 [2024-11-18T12:09:13.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.352 [2024-11-18T12:09:13.054Z] =================================================================================================================== 00:26:15.352 [2024-11-18T12:09:13.054Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2470945 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 
-- # run_bperf_err randread 131072 16 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2471426 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2471426 /var/tmp/bperf.sock 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2471426 ']' 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:15.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:15.352 13:09:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:15.352 [2024-11-18 13:09:13.034872] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:26:15.352 [2024-11-18 13:09:13.034919] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471426 ] 00:26:15.352 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.352 Zero copy mechanism will not be used. 00:26:15.612 [2024-11-18 13:09:13.112902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.612 [2024-11-18 13:09:13.155041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.612 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:15.612 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:15.612 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:15.612 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:15.872 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:15.872 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.872 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:26:15.872 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.872 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.872 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:16.131 nvme0n1 00:26:16.131 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:16.131 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.131 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.131 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.131 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:16.131 13:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:16.131 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:16.131 Zero copy mechanism will not be used. 00:26:16.131 Running I/O for 2 seconds... 
00:26:16.392 [2024-11-18 13:09:13.838178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.838213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.838224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.845213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.845240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.845250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.852170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.852194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.852203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.859275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.859298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.859307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.865880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.865904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.865913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.872775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.872799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.872808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.879403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.879425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.879433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.886010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.886033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.886041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.893119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.893142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.893150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.899914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.899938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.899947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.906237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.906259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.906267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.912432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.912455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:16.392 [2024-11-18 13:09:13.912467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.918592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.918614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.918622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.924867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.924891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.924899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.931411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.931434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.931443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.938758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.938781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.938790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.945857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.945880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.945889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.953378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.953401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.953409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.960905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.960929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.392 [2024-11-18 13:09:13.960938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.392 [2024-11-18 13:09:13.967305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.392 [2024-11-18 13:09:13.967326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:13.967335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:13.975020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:13.975047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:13.975056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:13.982727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:13.982750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:13.982758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:13.990222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:13.990244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:13.990253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:13.996777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 
00:26:16.393 [2024-11-18 13:09:13.996798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:13.996807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.000206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.000228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.000237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.006640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.006662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.006670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.012953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.012975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.012983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.019036] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.019056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.019065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.025053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.025075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.025084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.030830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.030851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.030860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.037078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.037100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.037108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.043106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.043127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.043136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.049500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.049522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.049530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.055230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.055251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.055259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.060875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.060897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.060905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.066388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.066410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.066418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.072403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.072425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.072433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.078068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.078089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.078104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.393 [2024-11-18 13:09:14.083647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.393 [2024-11-18 13:09:14.083669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.393 [2024-11-18 13:09:14.083677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.089117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.089143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.089151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.094640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.094662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.094671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.100715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.100737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.100745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.106603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.106625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:16.655 [2024-11-18 13:09:14.106634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.112389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.112412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.112420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.118480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.118503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.118511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.124557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.124578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.124586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.130252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.130279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.130287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.135964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.135986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.135995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.141413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.141435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.141443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.147112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.147133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.147141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.152852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.152873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.152882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.158282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.158304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.158312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.164129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.164152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.164160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.169807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.655 [2024-11-18 13:09:14.169829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.169837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.175571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 
00:26:16.655 [2024-11-18 13:09:14.175592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.655 [2024-11-18 13:09:14.175600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.655 [2024-11-18 13:09:14.181195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.181216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.181225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.186084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.186105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.186113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.190044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.190066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.190074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.195775] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.195797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.195805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.202098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.202119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.202127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.207782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.207804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.207812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.213919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.213941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.213950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.219921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.219944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.219953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.225681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.225702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.225715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.232047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.232069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.232077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.238389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.238410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.238419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.244548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.244570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.244578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.250562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.250583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.250592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.256566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.256588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.256596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.262879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.262900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.262908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.269149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.269171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.269179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.275320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.275341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.275350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.281411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.281432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.281441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.287542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.287563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:16.656 [2024-11-18 13:09:14.287572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.293263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.293285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.293293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.298900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.298922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.298930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.304594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.304615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.304625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.310555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.310576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.310584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.316928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.316948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.316956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.323050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.323071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.323079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.329204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.329225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.329237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.335407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.656 [2024-11-18 13:09:14.335428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.656 [2024-11-18 13:09:14.335436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.656 [2024-11-18 13:09:14.341274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.657 [2024-11-18 13:09:14.341297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.657 [2024-11-18 13:09:14.341305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.657 [2024-11-18 13:09:14.346919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.657 [2024-11-18 13:09:14.346943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.657 [2024-11-18 13:09:14.346952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.917 [2024-11-18 13:09:14.352800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.917 [2024-11-18 13:09:14.352822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.917 [2024-11-18 13:09:14.352831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.917 [2024-11-18 13:09:14.358835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 
00:26:16.917 [2024-11-18 13:09:14.358857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.917 [2024-11-18 13:09:14.358866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.917 [2024-11-18 13:09:14.364537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.917 [2024-11-18 13:09:14.364559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.917 [2024-11-18 13:09:14.364568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.917 [2024-11-18 13:09:14.370779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.917 [2024-11-18 13:09:14.370802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.917 [2024-11-18 13:09:14.370811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.917 [2024-11-18 13:09:14.377480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.917 [2024-11-18 13:09:14.377501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.917 [2024-11-18 13:09:14.377510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.917 [2024-11-18 13:09:14.383629] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.917 [2024-11-18 13:09:14.383655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.917 [2024-11-18 13:09:14.383663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.917 [2024-11-18 13:09:14.389772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.917 [2024-11-18 13:09:14.389794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.917 [2024-11-18 13:09:14.389802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.917 [2024-11-18 13:09:14.395755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.395777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.395785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.401605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.401628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.401636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.407456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.407477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.407486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.413533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.413554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.413563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.419446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.419471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.419480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.425071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.425092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.425100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.430878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.430899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.430907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.436234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.436255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.436264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.442192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.442213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.442222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.448526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.448548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.448557] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.454999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.455021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.455029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.461289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.461311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.461319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.467655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.467676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.467684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.473697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.473719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.473727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.479778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.479799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.479810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.486067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.486088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.486099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.492373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.492394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.492402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.498679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.498701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.498709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.504853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.504875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.504883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.510980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.511002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.511010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.516857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.516879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.516888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.522862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.522883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.522892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.528669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.528690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.528699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.534674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.534696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.534704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.540544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:16.918 [2024-11-18 13:09:14.540570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.918 [2024-11-18 13:09:14.540578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.918 [2024-11-18 13:09:14.546669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 
00:26:16.918 [2024-11-18 13:09:14.546691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.918 [2024-11-18 13:09:14.546700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:16.918 [2024-11-18 13:09:14.552485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.918 [2024-11-18 13:09:14.552506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.918 [2024-11-18 13:09:14.552517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:16.918 [2024-11-18 13:09:14.558418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.558439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.558447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:16.919 [2024-11-18 13:09:14.564243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.564265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.564273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.919 [2024-11-18 13:09:14.569815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.569837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.569845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:16.919 [2024-11-18 13:09:14.576109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.576131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.576139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:16.919 [2024-11-18 13:09:14.582442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.582464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.582472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:16.919 [2024-11-18 13:09:14.588498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.588519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.588528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.919 [2024-11-18 13:09:14.594508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.594530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.594537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:16.919 [2024-11-18 13:09:14.600603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.600626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.600634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:16.919 [2024-11-18 13:09:14.607026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.607048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.607056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:16.919 [2024-11-18 13:09:14.613342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:16.919 [2024-11-18 13:09:14.613373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.919 [2024-11-18 13:09:14.613381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.619587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.619610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.619619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.625781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.625803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.625812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.631720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.631741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.631749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.637837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.637860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.637868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.644062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.644083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.644095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.650450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.650472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.650481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.656796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.656819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.656828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.663683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.663706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.663715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.669820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.669842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.669851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.675743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.675765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.675774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.681642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.681663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.681672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.687550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.687572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.687581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.693505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.693526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.693535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.699083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.699107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.699118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.704664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.704686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.704695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.710549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.710570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.180 [2024-11-18 13:09:14.710579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.180 [2024-11-18 13:09:14.715963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.180 [2024-11-18 13:09:14.715984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.715993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.721659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.721681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.721689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.727363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.727384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.727392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.733101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.733123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.733131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.738556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.738585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.738593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.744233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.744254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.744262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.750209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.750231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.750239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.756117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.756139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.756147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.761711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.761733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.761742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.767462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.767484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.767492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.773453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.773474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.773483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.778965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.778987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.778995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.784544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.784565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.784574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.790246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.790267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.790276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.795988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.796010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.796021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.801988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.802010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.802018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.807751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.807773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.807781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.813730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.813752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.813760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.819289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.819311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.819319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.824757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.824781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.824789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.830190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.830212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.830220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.181 5119.00 IOPS, 639.88 MiB/s [2024-11-18T12:09:14.883Z]
[2024-11-18 13:09:14.836825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.836848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.836857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.842721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.842743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.842751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.848677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.848701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.181 [2024-11-18 13:09:14.848711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.181 [2024-11-18 13:09:14.854607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.181 [2024-11-18 13:09:14.854633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.182 [2024-11-18 13:09:14.854644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.182 [2024-11-18 13:09:14.860230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.182 [2024-11-18 13:09:14.860254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.182 [2024-11-18 13:09:14.860262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.182 [2024-11-18 13:09:14.865953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.182 [2024-11-18 13:09:14.865977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.182 [2024-11-18 13:09:14.865985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.182 [2024-11-18 13:09:14.871888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.182 [2024-11-18 13:09:14.871911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.182 [2024-11-18 13:09:14.871919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.442 [2024-11-18 13:09:14.877730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.442 [2024-11-18 13:09:14.877753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.877762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.883453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.883477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.883486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.889327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.889358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.889367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.895245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.895269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.895288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.900869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.900891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.900900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.906493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.906515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.906523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.912162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.912184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.912193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.917766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.917788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.917796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.923288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.923309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.923317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.928847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.928868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.928877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.934423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.934446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.934454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.940128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.940150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.940158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.945982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.946009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.946018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.951570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.951592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.951601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.957174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.957196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.957205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.963045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.963067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.963076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.968930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.968952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.968961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.974806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.974828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.974836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.980597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.980619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.980627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.986490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.986512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.986521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.992329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.992357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.992366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:14.998083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:14.998105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:14.998113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:15.003862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:15.003883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:15.003891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:15.009757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:15.009779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:15.009787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:15.015274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:15.015296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.443 [2024-11-18 13:09:15.015304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.443 [2024-11-18 13:09:15.020892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570)
00:26:17.443 [2024-11-18 13:09:15.020914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.443 [2024-11-18 13:09:15.020921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.443 [2024-11-18 13:09:15.026506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.443 [2024-11-18 13:09:15.026528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.443 [2024-11-18 13:09:15.026536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.443 [2024-11-18 13:09:15.032095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.032116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.032124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.037761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.037783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.037791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.043285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.043307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.043320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.048817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.048839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.048848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.054495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.054517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.054526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.060171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.060194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.060202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.065848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 
00:26:17.444 [2024-11-18 13:09:15.065870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.065878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.071490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.071512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.071521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.077114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.077136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.077144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.082954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.082975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.082983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.088853] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.088875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.088884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.094473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.094495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.094503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.100237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.100259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.100267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.106144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.106168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.106178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.111969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.111991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.112000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.117659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.117681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.117689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.123267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.123289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.123298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.128926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.128947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.128954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.444 [2024-11-18 13:09:15.134339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.444 [2024-11-18 13:09:15.134368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.444 [2024-11-18 13:09:15.134377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.139792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.139815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.705 [2024-11-18 13:09:15.139827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.145501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.145524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.705 [2024-11-18 13:09:15.145531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.151178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.151200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.705 [2024-11-18 13:09:15.151208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.156849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.156871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.705 [2024-11-18 13:09:15.156879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.162447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.162469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.705 [2024-11-18 13:09:15.162477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.168117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.168139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.705 [2024-11-18 13:09:15.168147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.173894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.173916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:17.705 [2024-11-18 13:09:15.173924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.179689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.179711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.705 [2024-11-18 13:09:15.179719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.185332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.185361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.705 [2024-11-18 13:09:15.185371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.191049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.705 [2024-11-18 13:09:15.191076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.705 [2024-11-18 13:09:15.191084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.705 [2024-11-18 13:09:15.196876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.196899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.196907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.202656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.202679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.202687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.208206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.208227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.208236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.213997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.214019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.214027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.219796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.219817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.219826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.225399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.225421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.225429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.231014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.231036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.231044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.236779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.236801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.236809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.242511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 
00:26:17.706 [2024-11-18 13:09:15.242533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.242541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.248032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.248053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.248062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.253612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.253633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.253641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.259076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.259098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.259106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.264856] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.264877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.264885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.270686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.270708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.270716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.276308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.276329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.276337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.281879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.281900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.281909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.287672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.287694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.287706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.293397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.293426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.293435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.299084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.299106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.299114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.304754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.304776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.304785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.310444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.310466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.310475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.316125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.316147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.316155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.321798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.321819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.321827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.327727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.327749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.327758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.333904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.333927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.333935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.340471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.340498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.340506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.348014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.706 [2024-11-18 13:09:15.348037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.706 [2024-11-18 13:09:15.348046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.706 [2024-11-18 13:09:15.355665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.707 [2024-11-18 13:09:15.355688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:17.707 [2024-11-18 13:09:15.355697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.707 [2024-11-18 13:09:15.363440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.707 [2024-11-18 13:09:15.363463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.707 [2024-11-18 13:09:15.363472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.707 [2024-11-18 13:09:15.370207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.707 [2024-11-18 13:09:15.370229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.707 [2024-11-18 13:09:15.370238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.707 [2024-11-18 13:09:15.376297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.707 [2024-11-18 13:09:15.376320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.707 [2024-11-18 13:09:15.376329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.707 [2024-11-18 13:09:15.383274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.707 [2024-11-18 13:09:15.383297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.707 [2024-11-18 13:09:15.383306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.707 [2024-11-18 13:09:15.390798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.707 [2024-11-18 13:09:15.390822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.707 [2024-11-18 13:09:15.390830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.707 [2024-11-18 13:09:15.399104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.707 [2024-11-18 13:09:15.399127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.707 [2024-11-18 13:09:15.399135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.967 [2024-11-18 13:09:15.406659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.967 [2024-11-18 13:09:15.406682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.967 [2024-11-18 13:09:15.406691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.967 [2024-11-18 13:09:15.413636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.967 [2024-11-18 13:09:15.413658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.967 [2024-11-18 13:09:15.413667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.967 [2024-11-18 13:09:15.420135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.967 [2024-11-18 13:09:15.420156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.967 [2024-11-18 13:09:15.420165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.967 [2024-11-18 13:09:15.426400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.967 [2024-11-18 13:09:15.426421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.967 [2024-11-18 13:09:15.426429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.967 [2024-11-18 13:09:15.432575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.967 [2024-11-18 13:09:15.432596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.967 [2024-11-18 13:09:15.432604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.967 [2024-11-18 13:09:15.438451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 
00:26:17.967 [2024-11-18 13:09:15.438473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.967 [2024-11-18 13:09:15.438481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.967 [2024-11-18 13:09:15.444662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.967 [2024-11-18 13:09:15.444683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.967 [2024-11-18 13:09:15.444691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.967 [2024-11-18 13:09:15.450822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.967 [2024-11-18 13:09:15.450844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.967 [2024-11-18 13:09:15.450852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.456975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.456997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.457009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.463059] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.463080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.463088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.468655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.468677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.468685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.474467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.474489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.474497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.479157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.479180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.479189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.484630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.484651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.484659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.489970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.489992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.490001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.495254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.495276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.495285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.500537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.500559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.500567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.505864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.505889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.505897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.511209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.511230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.511238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.514743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.514764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.514772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.519087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.519109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.519117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.524738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.524760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.524768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.530183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.530205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.530214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.535661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.535682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.535691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.541149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.541170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:17.968 [2024-11-18 13:09:15.541178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.546528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.546549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.546557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.551968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.551989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.551998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.557480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.557502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.557510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.563056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.563078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.563086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.568832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.568853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.568862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.574634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.574655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.574664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.579811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.579833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.579841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.585406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.585429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.585437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.590843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.590864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.590873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.968 [2024-11-18 13:09:15.596219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.968 [2024-11-18 13:09:15.596240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.968 [2024-11-18 13:09:15.596255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.601638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.601660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.601668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.607011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 
00:26:17.969 [2024-11-18 13:09:15.607033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.607041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.612483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.612505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.612514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.618127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.618148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.618157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.623796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.623819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.623827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.629695] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.629717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.629726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.635102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.635124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.635132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.640552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.640574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.640583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.646032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.646054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.646062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.651685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.651707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.651716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.657257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.657279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.657287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.969 [2024-11-18 13:09:15.662865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:17.969 [2024-11-18 13:09:15.662887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.969 [2024-11-18 13:09:15.662895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.668420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.229 [2024-11-18 13:09:15.668442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.229 [2024-11-18 13:09:15.668451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.673925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.229 [2024-11-18 13:09:15.673947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.229 [2024-11-18 13:09:15.673956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.679699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.229 [2024-11-18 13:09:15.679721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.229 [2024-11-18 13:09:15.679729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.685464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.229 [2024-11-18 13:09:15.685485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.229 [2024-11-18 13:09:15.685493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.691081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.229 [2024-11-18 13:09:15.691103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.229 [2024-11-18 13:09:15.691115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.696680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.229 [2024-11-18 13:09:15.696701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.229 [2024-11-18 13:09:15.696710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.702587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.229 [2024-11-18 13:09:15.702609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.229 [2024-11-18 13:09:15.702617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.708308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.229 [2024-11-18 13:09:15.708329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.229 [2024-11-18 13:09:15.708338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.713882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.229 [2024-11-18 13:09:15.713903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.229 [2024-11-18 13:09:15.713912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.229 [2024-11-18 13:09:15.719481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.719503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.719511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.725113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.725134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.725142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.730611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.730633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.730642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.736133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.736155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.736164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.741484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.741510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.741518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.746868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.746890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.746898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.752466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.752488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.752496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.758150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.758171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.758180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.763954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.763976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.763984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.769653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.769675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.769683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.775161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.775182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.775191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.780926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 
00:26:18.230 [2024-11-18 13:09:15.780947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.780956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.786745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.786767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.786775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.792385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.792408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.792416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.797865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.797886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.797895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.803384] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.803406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.803414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.808998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.809020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.809028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.814629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.814651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.814659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.820176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.820198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.820206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.826208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.826230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.826239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.230 [2024-11-18 13:09:15.831838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af570) 00:26:18.230 [2024-11-18 13:09:15.831861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.230 [2024-11-18 13:09:15.831869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.230 5261.50 IOPS, 657.69 MiB/s 00:26:18.230 Latency(us) 00:26:18.230 [2024-11-18T12:09:15.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.230 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:18.230 nvme0n1 : 2.00 5262.21 657.78 0.00 0.00 3037.90 723.03 8377.21 00:26:18.230 [2024-11-18T12:09:15.932Z] =================================================================================================================== 00:26:18.230 [2024-11-18T12:09:15.932Z] Total : 5262.21 657.78 0.00 0.00 3037.90 723.03 8377.21 00:26:18.230 { 00:26:18.230 "results": [ 00:26:18.230 { 00:26:18.230 "job": "nvme0n1", 00:26:18.230 "core_mask": "0x2", 00:26:18.230 "workload": "randread", 00:26:18.230 "status": "finished", 00:26:18.230 "queue_depth": 16, 00:26:18.230 "io_size": 131072, 00:26:18.230 "runtime": 2.002769, 00:26:18.230 "iops": 5262.214464074489, 00:26:18.230 "mibps": 
657.7768080093111, 00:26:18.230 "io_failed": 0, 00:26:18.230 "io_timeout": 0, 00:26:18.230 "avg_latency_us": 3037.90122286992, 00:26:18.230 "min_latency_us": 723.0330434782609, 00:26:18.230 "max_latency_us": 8377.210434782608 00:26:18.230 } 00:26:18.230 ], 00:26:18.230 "core_count": 1 00:26:18.230 } 00:26:18.230 13:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:18.230 13:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:18.230 13:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:18.230 | .driver_specific 00:26:18.230 | .nvme_error 00:26:18.230 | .status_code 00:26:18.230 | .command_transient_transport_error' 00:26:18.230 13:09:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:18.490 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 339 > 0 )) 00:26:18.490 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2471426 00:26:18.490 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2471426 ']' 00:26:18.490 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2471426 00:26:18.490 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:18.490 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:18.490 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2471426 00:26:18.490 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 
-- # process_name=reactor_1 00:26:18.490 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:18.491 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2471426' 00:26:18.491 killing process with pid 2471426 00:26:18.491 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2471426 00:26:18.491 Received shutdown signal, test time was about 2.000000 seconds 00:26:18.491 00:26:18.491 Latency(us) 00:26:18.491 [2024-11-18T12:09:16.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.491 [2024-11-18T12:09:16.193Z] =================================================================================================================== 00:26:18.491 [2024-11-18T12:09:16.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.491 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2471426 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2471906 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2471906 /var/tmp/bperf.sock 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2471906 ']' 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:18.750 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:18.750 [2024-11-18 13:09:16.314973] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:26:18.750 [2024-11-18 13:09:16.315023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471906 ] 00:26:18.750 [2024-11-18 13:09:16.389096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.750 [2024-11-18 13:09:16.432084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.009 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:19.009 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:19.009 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:19.009 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:19.269 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:19.269 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.269 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:19.269 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.269 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.269 13:09:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.528 nvme0n1 00:26:19.528 13:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:19.528 13:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.528 13:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:19.528 13:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.528 13:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:19.529 13:09:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.788 Running I/O for 2 seconds... 
00:26:19.788 [2024-11-18 13:09:17.266584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ee5c8 00:26:19.788 [2024-11-18 13:09:17.267365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.267394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.275858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eea00 00:26:19.788 [2024-11-18 13:09:17.276747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.276770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.285642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ebb98 00:26:19.788 [2024-11-18 13:09:17.286651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.286671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.295462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f9b30 00:26:19.788 [2024-11-18 13:09:17.296595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.296615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.305233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e49b0 00:26:19.788 [2024-11-18 13:09:17.306511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.306532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.313768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f92c0 00:26:19.788 [2024-11-18 13:09:17.315016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.315036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.321769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ebb98 00:26:19.788 [2024-11-18 13:09:17.322422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.322441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.331490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e1b48 00:26:19.788 [2024-11-18 13:09:17.332262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.332281] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.341223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166efae0 00:26:19.788 [2024-11-18 13:09:17.342127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.342151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.350969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ed4e8 00:26:19.788 [2024-11-18 13:09:17.351995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.352015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.360701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f57b0 00:26:19.788 [2024-11-18 13:09:17.361835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.361854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.370148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e1b48 00:26:19.788 [2024-11-18 13:09:17.371286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.371304] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.379529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e99d8 00:26:19.788 [2024-11-18 13:09:17.380210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.380230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.388583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f2948 00:26:19.788 [2024-11-18 13:09:17.389519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.389538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.397886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fd208 00:26:19.788 [2024-11-18 13:09:17.398806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.398826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.407505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e3d08 00:26:19.788 [2024-11-18 13:09:17.408178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7528 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:19.788 [2024-11-18 13:09:17.408198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.416552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f81e0 00:26:19.788 [2024-11-18 13:09:17.417521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.788 [2024-11-18 13:09:17.417541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:19.788 [2024-11-18 13:09:17.425864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fb8b8 00:26:19.788 [2024-11-18 13:09:17.426685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.789 [2024-11-18 13:09:17.426705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:19.789 [2024-11-18 13:09:17.435793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ed0b0 00:26:19.789 [2024-11-18 13:09:17.436989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.789 [2024-11-18 13:09:17.437009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:19.789 [2024-11-18 13:09:17.445256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e12d8 00:26:19.789 [2024-11-18 13:09:17.445959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 
nsid:1 lba:1832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.789 [2024-11-18 13:09:17.445979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:19.789 [2024-11-18 13:09:17.453990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e1f80 00:26:19.789 [2024-11-18 13:09:17.454618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.789 [2024-11-18 13:09:17.454637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:19.789 [2024-11-18 13:09:17.463030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e2c28 00:26:19.789 [2024-11-18 13:09:17.463900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.789 [2024-11-18 13:09:17.463920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:19.789 [2024-11-18 13:09:17.472589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ed0b0 00:26:19.789 [2024-11-18 13:09:17.473557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.789 [2024-11-18 13:09:17.473576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:19.789 [2024-11-18 13:09:17.482002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e7c50 00:26:19.789 [2024-11-18 13:09:17.482475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.789 [2024-11-18 13:09:17.482494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.492731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f9b30 00:26:20.047 [2024-11-18 13:09:17.494039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.494059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.502454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e8d30 00:26:20.047 [2024-11-18 13:09:17.503865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.503885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.512132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f57b0 00:26:20.047 [2024-11-18 13:09:17.513679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.513698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.518676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fdeb0 00:26:20.047 
[2024-11-18 13:09:17.519375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.519394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.530597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166df118 00:26:20.047 [2024-11-18 13:09:17.531992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.532011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.540291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f1430 00:26:20.047 [2024-11-18 13:09:17.541839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.541860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.546818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166dfdc0 00:26:20.047 [2024-11-18 13:09:17.547513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.547532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.556531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1848280) with pdu=0x2000166f96f8 00:26:20.047 [2024-11-18 13:09:17.557476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.557495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.565945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5220 00:26:20.047 [2024-11-18 13:09:17.566886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.566908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.575673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e6300 00:26:20.047 [2024-11-18 13:09:17.576630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.576651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.585380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f1868 00:26:20.047 [2024-11-18 13:09:17.586561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.586583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.594793] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ebb98 00:26:20.047 [2024-11-18 13:09:17.595974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.595994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.603907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e0a68 00:26:20.047 [2024-11-18 13:09:17.604633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.604653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.612690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e27f0 00:26:20.047 [2024-11-18 13:09:17.613762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.613782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:20.047 [2024-11-18 13:09:17.624333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ef6a8 00:26:20.047 [2024-11-18 13:09:17.625891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.047 [2024-11-18 13:09:17.625910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:26:20.047 [2024-11-18 13:09:17.630873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f8e88 00:26:20.048 [2024-11-18 13:09:17.631589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.631608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.642804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e0a68 00:26:20.048 [2024-11-18 13:09:17.644153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.644172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.652511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f9b30 00:26:20.048 [2024-11-18 13:09:17.654087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.654105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.659181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f7da8 00:26:20.048 [2024-11-18 13:09:17.660028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.660047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.670143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166de470 00:26:20.048 [2024-11-18 13:09:17.671160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.671181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.680619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166de470 00:26:20.048 [2024-11-18 13:09:17.682109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.682129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.687233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ebb98 00:26:20.048 [2024-11-18 13:09:17.687986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.688006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.699330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e9168 00:26:20.048 [2024-11-18 13:09:17.700819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.700839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.705897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f9b30 00:26:20.048 [2024-11-18 13:09:17.706655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.706674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.715797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ebb98 00:26:20.048 [2024-11-18 13:09:17.716558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.716577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.725481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f1ca0 00:26:20.048 [2024-11-18 13:09:17.726626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.726646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.735132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f7100 00:26:20.048 [2024-11-18 13:09:17.735784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 
[2024-11-18 13:09:17.735804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:20.048 [2024-11-18 13:09:17.744150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5ec8 00:26:20.048 [2024-11-18 13:09:17.744707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.048 [2024-11-18 13:09:17.744726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.755718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f96f8 00:26:20.306 [2024-11-18 13:09:17.757297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.757316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.762260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e0630 00:26:20.306 [2024-11-18 13:09:17.762989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.763007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.773883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eff18 00:26:20.306 [2024-11-18 13:09:17.775344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23700 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.775369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.780750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f8e88 00:26:20.306 [2024-11-18 13:09:17.781484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.781502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.790154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e3d08 00:26:20.306 [2024-11-18 13:09:17.790799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.790818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.800390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f57b0 00:26:20.306 [2024-11-18 13:09:17.801152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.801172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.810635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f2d80 00:26:20.306 [2024-11-18 13:09:17.811871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:117 nsid:1 lba:7715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.811889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.819333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166de8a8 00:26:20.306 [2024-11-18 13:09:17.820278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.820297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.829405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eaef0 00:26:20.306 [2024-11-18 13:09:17.830759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.830783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.838170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e88f8 00:26:20.306 [2024-11-18 13:09:17.839196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.839215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.847728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fac10 00:26:20.306 [2024-11-18 13:09:17.848855] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.848874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.857202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eee38 00:26:20.306 [2024-11-18 13:09:17.857864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.857884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.866718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e12d8 00:26:20.306 [2024-11-18 13:09:17.867630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.867650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.876417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e6b70 00:26:20.306 [2024-11-18 13:09:17.877671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.877692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.884883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5a90 
00:26:20.306 [2024-11-18 13:09:17.886122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.886143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.895108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166de038 00:26:20.306 [2024-11-18 13:09:17.896142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.896162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.904471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166de038 00:26:20.306 [2024-11-18 13:09:17.905501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.905521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.913109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166de8a8 00:26:20.306 [2024-11-18 13:09:17.914222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.914244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.922841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1848280) with pdu=0x2000166e6b70 00:26:20.306 [2024-11-18 13:09:17.924072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.924092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.931473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fbcf0 00:26:20.306 [2024-11-18 13:09:17.932250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.306 [2024-11-18 13:09:17.932269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:20.306 [2024-11-18 13:09:17.941029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5a90 00:26:20.306 [2024-11-18 13:09:17.942069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.307 [2024-11-18 13:09:17.942088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:20.307 [2024-11-18 13:09:17.950481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166df118 00:26:20.307 [2024-11-18 13:09:17.951498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.307 [2024-11-18 13:09:17.951517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:20.307 [2024-11-18 13:09:17.960400] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5220 00:26:20.307 [2024-11-18 13:09:17.961540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.307 [2024-11-18 13:09:17.961558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:20.307 [2024-11-18 13:09:17.968211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fc998 00:26:20.307 [2024-11-18 13:09:17.968728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.307 [2024-11-18 13:09:17.968747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:20.307 [2024-11-18 13:09:17.977611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ebb98 00:26:20.307 [2024-11-18 13:09:17.978368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.307 [2024-11-18 13:09:17.978388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:20.307 [2024-11-18 13:09:17.986309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ed920 00:26:20.307 [2024-11-18 13:09:17.987068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.307 [2024-11-18 13:09:17.987087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:26:20.307 [2024-11-18 13:09:17.997222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ddc00 00:26:20.307 [2024-11-18 13:09:17.998239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.307 [2024-11-18 13:09:17.998259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.005805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5220 00:26:20.567 [2024-11-18 13:09:18.006890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.006909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.017154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eaab8 00:26:20.567 [2024-11-18 13:09:18.018768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.018788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.023937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f20d8 00:26:20.567 [2024-11-18 13:09:18.024856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.024875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.035623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e8088 00:26:20.567 [2024-11-18 13:09:18.037030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.037049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.042479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ebb98 00:26:20.567 [2024-11-18 13:09:18.043130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.043150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.052486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fcdd0 00:26:20.567 [2024-11-18 13:09:18.053271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.053290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.064004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166df118 00:26:20.567 [2024-11-18 13:09:18.065230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.065249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.073761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166feb58 00:26:20.567 [2024-11-18 13:09:18.075159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.075178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.080556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166edd58 00:26:20.567 [2024-11-18 13:09:18.081236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.081256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.091928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f8a50 00:26:20.567 [2024-11-18 13:09:18.092974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.092994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.100713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166dece0 00:26:20.567 [2024-11-18 13:09:18.101675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:20.567 [2024-11-18 13:09:18.101694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.109916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166efae0 00:26:20.567 [2024-11-18 13:09:18.110947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.110966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.119589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f6458 00:26:20.567 [2024-11-18 13:09:18.120748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.120767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.129310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fe2e8 00:26:20.567 [2024-11-18 13:09:18.130586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.130604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.139002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fb8b8 00:26:20.567 [2024-11-18 13:09:18.140419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7203 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.140438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.145777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fef90 00:26:20.567 [2024-11-18 13:09:18.146502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.146521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.157391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ebb98 00:26:20.567 [2024-11-18 13:09:18.158594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.158616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.166765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f6020 00:26:20.567 [2024-11-18 13:09:18.167507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.167526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.175498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e1b48 00:26:20.567 [2024-11-18 13:09:18.176228] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.176247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.184575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f4b08 00:26:20.567 [2024-11-18 13:09:18.185456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.567 [2024-11-18 13:09:18.185476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:20.567 [2024-11-18 13:09:18.195615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e0630 00:26:20.568 [2024-11-18 13:09:18.197052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.568 [2024-11-18 13:09:18.197071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:20.568 [2024-11-18 13:09:18.202425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f57b0 00:26:20.568 [2024-11-18 13:09:18.203132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.568 [2024-11-18 13:09:18.203150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:20.568 [2024-11-18 13:09:18.213903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166feb58 00:26:20.568 [2024-11-18 13:09:18.215113] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.568 [2024-11-18 13:09:18.215132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:20.568 [2024-11-18 13:09:18.223359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ea680 00:26:20.568 [2024-11-18 13:09:18.224107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.568 [2024-11-18 13:09:18.224126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:20.568 [2024-11-18 13:09:18.232743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f2948 00:26:20.568 [2024-11-18 13:09:18.233857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.568 [2024-11-18 13:09:18.233876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:20.568 [2024-11-18 13:09:18.241809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fef90 00:26:20.568 [2024-11-18 13:09:18.242680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.568 [2024-11-18 13:09:18.242699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:20.568 [2024-11-18 13:09:18.250722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with 
pdu=0x2000166eea00 00:26:20.568 [2024-11-18 13:09:18.251981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.568 [2024-11-18 13:09:18.252000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:20.568 27087.00 IOPS, 105.81 MiB/s [2024-11-18T12:09:18.270Z] [2024-11-18 13:09:18.259172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e4140 00:26:20.568 [2024-11-18 13:09:18.259886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.568 [2024-11-18 13:09:18.259905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.268950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e27f0 00:26:20.829 [2024-11-18 13:09:18.269792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.269813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.278315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e3060 00:26:20.829 [2024-11-18 13:09:18.279268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.279288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 
13:09:18.287932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e4de8 00:26:20.829 [2024-11-18 13:09:18.288885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.288905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.297002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ea248 00:26:20.829 [2024-11-18 13:09:18.297522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.297543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.307018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5658 00:26:20.829 [2024-11-18 13:09:18.307669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.307689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.316995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f5378 00:26:20.829 [2024-11-18 13:09:18.317764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.317787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.327855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166df118 00:26:20.829 [2024-11-18 13:09:18.329398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.329417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.334493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ea680 00:26:20.829 [2024-11-18 13:09:18.335200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.335218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.345114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eaef0 00:26:20.829 [2024-11-18 13:09:18.345985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.346005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.353762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ddc00 00:26:20.829 [2024-11-18 13:09:18.355061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.355080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.361731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e27f0 00:26:20.829 [2024-11-18 13:09:18.362405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.362424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.371434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f4298 00:26:20.829 [2024-11-18 13:09:18.372232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.372252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.380860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f4b08 00:26:20.829 [2024-11-18 13:09:18.381714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.381733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.391778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eaef0 00:26:20.829 [2024-11-18 13:09:18.392977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.392996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.401082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ee190 00:26:20.829 [2024-11-18 13:09:18.402370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.402392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.410813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ec840 00:26:20.829 [2024-11-18 13:09:18.412221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.412240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.420510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f6890 00:26:20.829 [2024-11-18 13:09:18.422045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.422065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.427052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fb8b8 00:26:20.829 [2024-11-18 13:09:18.427778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 
[2024-11-18 13:09:18.427797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.437101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ea248 00:26:20.829 [2024-11-18 13:09:18.438394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.438414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.445683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5220 00:26:20.829 [2024-11-18 13:09:18.446387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.829 [2024-11-18 13:09:18.446406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:20.829 [2024-11-18 13:09:18.455244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5a90 00:26:20.830 [2024-11-18 13:09:18.456076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.830 [2024-11-18 13:09:18.456095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:20.830 [2024-11-18 13:09:18.464503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ef270 00:26:20.830 [2024-11-18 13:09:18.465429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23276 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.830 [2024-11-18 13:09:18.465449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:20.830 [2024-11-18 13:09:18.473936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f8e88 00:26:20.830 [2024-11-18 13:09:18.474438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.830 [2024-11-18 13:09:18.474458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:20.830 [2024-11-18 13:09:18.483649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e0ea0 00:26:20.830 [2024-11-18 13:09:18.484364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.830 [2024-11-18 13:09:18.484383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:20.830 [2024-11-18 13:09:18.493338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ebb98 00:26:20.830 [2024-11-18 13:09:18.494079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.830 [2024-11-18 13:09:18.494098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:20.830 [2024-11-18 13:09:18.503964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f1ca0 00:26:20.830 [2024-11-18 13:09:18.505502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:14984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.830 [2024-11-18 13:09:18.505523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:20.830 [2024-11-18 13:09:18.510520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e38d0 00:26:20.830 [2024-11-18 13:09:18.511222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.830 [2024-11-18 13:09:18.511241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:20.830 [2024-11-18 13:09:18.519905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166edd58 00:26:20.830 [2024-11-18 13:09:18.520641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.830 [2024-11-18 13:09:18.520660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:21.090 [2024-11-18 13:09:18.529239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fcdd0 00:26:21.091 [2024-11-18 13:09:18.529878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.529897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.538081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ecc78 00:26:21.091 [2024-11-18 13:09:18.538789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.538808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.548460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166dece0 00:26:21.091 [2024-11-18 13:09:18.549294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.549314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.557726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e27f0 00:26:21.091 [2024-11-18 13:09:18.558617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.558636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.566990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f7100 00:26:21.091 [2024-11-18 13:09:18.567873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.567892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.576236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e3d08 
00:26:21.091 [2024-11-18 13:09:18.577080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.577099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.584885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e84c0 00:26:21.091 [2024-11-18 13:09:18.585717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.585735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.595199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e2c28 00:26:21.091 [2024-11-18 13:09:18.596195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.596215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.604463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e6fa8 00:26:21.091 [2024-11-18 13:09:18.605428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.605447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.613717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1848280) with pdu=0x2000166e0630 00:26:21.091 [2024-11-18 13:09:18.614710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.614729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.623006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166df988 00:26:21.091 [2024-11-18 13:09:18.624011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.624030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.632301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f0bc0 00:26:21.091 [2024-11-18 13:09:18.633280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.633299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.641791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e5ec8 00:26:21.091 [2024-11-18 13:09:18.642776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.642799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.651049] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fb8b8 00:26:21.091 [2024-11-18 13:09:18.652005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.652025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.660324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ecc78 00:26:21.091 [2024-11-18 13:09:18.661280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.661299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.669583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ec840 00:26:21.091 [2024-11-18 13:09:18.670558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.670577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.678837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e99d8 00:26:21.091 [2024-11-18 13:09:18.679796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.679815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:26:21.091 [2024-11-18 13:09:18.688109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e8d30 00:26:21.091 [2024-11-18 13:09:18.689061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.689080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.697385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e7c50 00:26:21.091 [2024-11-18 13:09:18.698335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.698360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.706662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e84c0 00:26:21.091 [2024-11-18 13:09:18.707619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.707638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.715944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f5be8 00:26:21.091 [2024-11-18 13:09:18.716897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.716916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.725199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e7818 00:26:21.091 [2024-11-18 13:09:18.726150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.726169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.734445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ed4e8 00:26:21.091 [2024-11-18 13:09:18.735389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.091 [2024-11-18 13:09:18.735408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.091 [2024-11-18 13:09:18.743665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ed4e8 00:26:21.092 [2024-11-18 13:09:18.744611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.092 [2024-11-18 13:09:18.744630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.092 [2024-11-18 13:09:18.752876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ed4e8 00:26:21.092 [2024-11-18 13:09:18.753845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.092 [2024-11-18 13:09:18.753864] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.092 [2024-11-18 13:09:18.762115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ed4e8 00:26:21.092 [2024-11-18 13:09:18.763089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.092 [2024-11-18 13:09:18.763108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.092 [2024-11-18 13:09:18.771617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e4140 00:26:21.092 [2024-11-18 13:09:18.772344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.092 [2024-11-18 13:09:18.772368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:21.092 [2024-11-18 13:09:18.780105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166df118 00:26:21.092 [2024-11-18 13:09:18.780850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.092 [2024-11-18 13:09:18.780869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.790090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166de038 00:26:21.353 [2024-11-18 13:09:18.791060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.791079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.799346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e12d8 00:26:21.353 [2024-11-18 13:09:18.800301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.800320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.808632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166edd58 00:26:21.353 [2024-11-18 13:09:18.809607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.809625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.817884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f4b08 00:26:21.353 [2024-11-18 13:09:18.818859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.818878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.827118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e7818 00:26:21.353 [2024-11-18 13:09:18.828073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11427 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:21.353 [2024-11-18 13:09:18.828092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.836417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f7970 00:26:21.353 [2024-11-18 13:09:18.837393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.837411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.845670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e8088 00:26:21.353 [2024-11-18 13:09:18.846619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.846638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.854910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f0bc0 00:26:21.353 [2024-11-18 13:09:18.855862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.855881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.864188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fb8b8 00:26:21.353 [2024-11-18 13:09:18.865142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 
nsid:1 lba:23615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.865161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.872807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166df118 00:26:21.353 [2024-11-18 13:09:18.873746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.873765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.883163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f5378 00:26:21.353 [2024-11-18 13:09:18.884224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.884247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.892748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e0ea0 00:26:21.353 [2024-11-18 13:09:18.893942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.893962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.900925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e0630 00:26:21.353 [2024-11-18 13:09:18.901803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.901823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.909527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e27f0 00:26:21.353 [2024-11-18 13:09:18.910360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.910390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.919309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fd208 00:26:21.353 [2024-11-18 13:09:18.920293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.920312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.929609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e23b8 00:26:21.353 [2024-11-18 13:09:18.930729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.930749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.938903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f2d80 00:26:21.353 
[2024-11-18 13:09:18.940029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.940048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.949345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eee38 00:26:21.353 [2024-11-18 13:09:18.950857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.950877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.957364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e12d8 00:26:21.353 [2024-11-18 13:09:18.958473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.958492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.966603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e12d8 00:26:21.353 [2024-11-18 13:09:18.967730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.967749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.975833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1848280) with pdu=0x2000166e12d8 00:26:21.353 [2024-11-18 13:09:18.976860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.976879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.985399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ddc00 00:26:21.353 [2024-11-18 13:09:18.986648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.986667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:18.994214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f0350 00:26:21.353 [2024-11-18 13:09:18.995421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:18.995440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:19.003920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f1430 00:26:21.353 [2024-11-18 13:09:19.005247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.353 [2024-11-18 13:09:19.005265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:21.353 [2024-11-18 13:09:19.013649] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166dece0 00:26:21.354 [2024-11-18 13:09:19.015144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.354 [2024-11-18 13:09:19.015163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:21.354 [2024-11-18 13:09:19.023366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e6300 00:26:21.354 [2024-11-18 13:09:19.024862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.354 [2024-11-18 13:09:19.024881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:21.354 [2024-11-18 13:09:19.031377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e38d0 00:26:21.354 [2024-11-18 13:09:19.032389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.354 [2024-11-18 13:09:19.032408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:21.354 [2024-11-18 13:09:19.040563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e9168 00:26:21.354 [2024-11-18 13:09:19.041657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.354 [2024-11-18 13:09:19.041676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:26:21.354 [2024-11-18 13:09:19.049615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fac10 00:26:21.613 [2024-11-18 13:09:19.050761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.050780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.059399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eaef0 00:26:21.613 [2024-11-18 13:09:19.060613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.060632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.069126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e8088 00:26:21.613 [2024-11-18 13:09:19.070456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.070475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.078816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fb480 00:26:21.613 [2024-11-18 13:09:19.080277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.080296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.088254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ee5c8 00:26:21.613 [2024-11-18 13:09:19.089722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.089741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.097310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166dfdc0 00:26:21.613 [2024-11-18 13:09:19.098754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.098773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.105921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f2948 00:26:21.613 [2024-11-18 13:09:19.107021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.107041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.115111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e99d8 00:26:21.613 [2024-11-18 13:09:19.116203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.116222] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.125568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e1f80 00:26:21.613 [2024-11-18 13:09:19.127124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.127147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.132250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e3498 00:26:21.613 [2024-11-18 13:09:19.133070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.133089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.141713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eee38 00:26:21.613 [2024-11-18 13:09:19.142531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.142550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.151985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166fda78 00:26:21.613 [2024-11-18 13:09:19.152965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.152985] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.160407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f5378 00:26:21.613 [2024-11-18 13:09:19.161224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.161243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.169829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166feb58 00:26:21.613 [2024-11-18 13:09:19.170638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.613 [2024-11-18 13:09:19.170658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:21.613 [2024-11-18 13:09:19.179916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166f7da8 00:26:21.613 [2024-11-18 13:09:19.180988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.614 [2024-11-18 13:09:19.181007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:21.614 [2024-11-18 13:09:19.189362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e6b70 00:26:21.614 [2024-11-18 13:09:19.190441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:21.614 [2024-11-18 13:09:19.190462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:21.614 [2024-11-18 13:09:19.198924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e1710 00:26:21.614 [2024-11-18 13:09:19.200023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.614 [2024-11-18 13:09:19.200043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:21.614 [2024-11-18 13:09:19.207335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e1f80 00:26:21.614 [2024-11-18 13:09:19.208393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.614 [2024-11-18 13:09:19.208413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.614 [2024-11-18 13:09:19.216690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166eee38 00:26:21.614 [2024-11-18 13:09:19.217523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.614 [2024-11-18 13:09:19.217543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:21.614 [2024-11-18 13:09:19.225919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166ea680 00:26:21.614 [2024-11-18 13:09:19.226871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:13107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.614 [2024-11-18 13:09:19.226891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:21.614 [2024-11-18 13:09:19.235244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166df118 00:26:21.614 [2024-11-18 13:09:19.235756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.614 [2024-11-18 13:09:19.235776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:21.614 [2024-11-18 13:09:19.244656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166edd58 00:26:21.614 [2024-11-18 13:09:19.245410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.614 [2024-11-18 13:09:19.245430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:21.614 [2024-11-18 13:09:19.255832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1848280) with pdu=0x2000166e0ea0 00:26:21.614 [2024-11-18 13:09:19.258043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.614 [2024-11-18 13:09:19.258064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:21.614 27275.00 IOPS, 106.54 MiB/s 00:26:21.614 Latency(us) 00:26:21.614 [2024-11-18T12:09:19.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.614 Job: nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:26:21.614 nvme0n1 : 2.01 27292.98 106.61 0.00 0.00 4685.85 1852.10 12765.27 00:26:21.614 [2024-11-18T12:09:19.316Z] =================================================================================================================== 00:26:21.614 [2024-11-18T12:09:19.316Z] Total : 27292.98 106.61 0.00 0.00 4685.85 1852.10 12765.27 00:26:21.614 { 00:26:21.614 "results": [ 00:26:21.614 { 00:26:21.614 "job": "nvme0n1", 00:26:21.614 "core_mask": "0x2", 00:26:21.614 "workload": "randwrite", 00:26:21.614 "status": "finished", 00:26:21.614 "queue_depth": 128, 00:26:21.614 "io_size": 4096, 00:26:21.614 "runtime": 2.007989, 00:26:21.614 "iops": 27292.978198585748, 00:26:21.614 "mibps": 106.61319608822558, 00:26:21.614 "io_failed": 0, 00:26:21.614 "io_timeout": 0, 00:26:21.614 "avg_latency_us": 4685.849580259137, 00:26:21.614 "min_latency_us": 1852.104347826087, 00:26:21.614 "max_latency_us": 12765.27304347826 00:26:21.614 } 00:26:21.614 ], 00:26:21.614 "core_count": 1 00:26:21.614 } 00:26:21.614 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:21.614 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:21.614 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:21.614 | .driver_specific 00:26:21.614 | .nvme_error 00:26:21.614 | .status_code 00:26:21.614 | .command_transient_transport_error' 00:26:21.614 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 )) 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2471906 00:26:21.873 
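The `(( 214 > 0 ))` check in the trace above counts transient transport errors by walking the `bdev_get_iostat` JSON with the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`. The same extraction can be sketched in Python against a hypothetical payload of that shape — only the field names come from the jq filter in the trace; the surrounding structure and bdev name are illustrative assumptions:

```python
import json

# Hypothetical bdev_get_iostat payload: the nested field names are taken
# from the jq filter in the trace, everything else is illustrative.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 214
          }
        }
      }
    }
  ]
}
""")

# Same path the jq filter walks.
errcount = (iostat["bdevs"][0]["driver_specific"]
            ["nvme_error"]["status_code"]
            ["command_transient_transport_error"])

# Mirrors the shell test's gate: the run passes only if the injected
# digest corruption actually produced transient transport errors.
assert errcount > 0
print(errcount)
```

This matches the count reported in the run above: every injected data digest error surfaces as one `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completion, and the test only requires the counter to be non-zero.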
13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2471906 ']' 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2471906 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2471906 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2471906' 00:26:21.873 killing process with pid 2471906 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2471906 00:26:21.873 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.873 00:26:21.873 Latency(us) 00:26:21.873 [2024-11-18T12:09:19.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.873 [2024-11-18T12:09:19.575Z] =================================================================================================================== 00:26:21.873 [2024-11-18T12:09:19.575Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.873 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2471906 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@54 -- # local rw bs qd 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2472582 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2472582 /var/tmp/bperf.sock 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2472582 ']' 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:22.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:22.133 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:22.133 [2024-11-18 13:09:19.740172] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:26:22.133 [2024-11-18 13:09:19.740221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472582 ] 00:26:22.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:22.133 Zero copy mechanism will not be used. 00:26:22.133 [2024-11-18 13:09:19.816910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.391 [2024-11-18 13:09:19.859594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.391 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:22.391 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:22.391 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:22.391 13:09:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:22.650 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:22.650 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.650 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:22.650 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.650 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.650 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.909 nvme0n1 00:26:22.909 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:22.909 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.909 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:22.909 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.909 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:22.909 13:09:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:23.169 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:23.169 Zero copy mechanism will not be used. 00:26:23.169 Running I/O for 2 seconds... 
00:26:23.169 [2024-11-18 13:09:20.703447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.703730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.703760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.708530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.708783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.708808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.713278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.713538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.713560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.718282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.718550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.718572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.723240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.723500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.723522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.728886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.729151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.729172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.734543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.734805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.734826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.739839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.740100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.740122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.744761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.745013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.745034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.749862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.750125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.750147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.755019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.755275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.755296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.759927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.760178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:23.169 [2024-11-18 13:09:20.760200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.764714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.764978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.764999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.769589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.769840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.769862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.774650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.774898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.774918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.780265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.780521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.780543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.785604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.785864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.785886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.790947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.791209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.791231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.796082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.796332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.796361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.801450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.801714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.801735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.806672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.806922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.806942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.812118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.812367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.812404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.817704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.169 [2024-11-18 13:09:20.817960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.169 [2024-11-18 13:09:20.817982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.169 [2024-11-18 13:09:20.823180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 
00:26:23.170 [2024-11-18 13:09:20.823438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.170 [2024-11-18 13:09:20.823460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.170 [2024-11-18 13:09:20.828522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.170 [2024-11-18 13:09:20.828771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.170 [2024-11-18 13:09:20.828792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.170 [2024-11-18 13:09:20.833300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.170 [2024-11-18 13:09:20.833558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.170 [2024-11-18 13:09:20.833580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.170 [2024-11-18 13:09:20.838279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.170 [2024-11-18 13:09:20.838533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.170 [2024-11-18 13:09:20.838555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.170 [2024-11-18 13:09:20.843282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.170 [2024-11-18 13:09:20.843537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.170 [2024-11-18 13:09:20.843562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.170 [2024-11-18 13:09:20.848836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.170 [2024-11-18 13:09:20.849097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.170 [2024-11-18 13:09:20.849118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.170 [2024-11-18 13:09:20.854388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.170 [2024-11-18 13:09:20.854640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.170 [2024-11-18 13:09:20.854661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.170 [2024-11-18 13:09:20.859864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.170 [2024-11-18 13:09:20.860114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.170 [2024-11-18 13:09:20.860134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.170 [2024-11-18 
13:09:20.865115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.170 [2024-11-18 13:09:20.865396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.170 [2024-11-18 13:09:20.865418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.430 [2024-11-18 13:09:20.870416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.430 [2024-11-18 13:09:20.870669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.430 [2024-11-18 13:09:20.870691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.430 [2024-11-18 13:09:20.875442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.430 [2024-11-18 13:09:20.875688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.430 [2024-11-18 13:09:20.875710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.430 [2024-11-18 13:09:20.880761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.430 [2024-11-18 13:09:20.881012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.430 [2024-11-18 13:09:20.881034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.430 [2024-11-18 13:09:20.885858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.430 [2024-11-18 13:09:20.886108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.430 [2024-11-18 13:09:20.886129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.430 [2024-11-18 13:09:20.890925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.891179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.891202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.896620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.896884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.896905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.902854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.903106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.903127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.908125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.908380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.908400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.913010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.913258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.913279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.917902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.918153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.918174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.922653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.922905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.922926] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.927612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.927861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.927883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.932332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.932592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.932613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.937166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.937421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.937443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.941874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.942125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.942145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.946630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.946892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.946914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.951415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.951671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.951692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.956149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.956408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.431 [2024-11-18 13:09:20.956427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.431 [2024-11-18 13:09:20.961632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.431 [2024-11-18 13:09:20.961887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:20.961908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:20.967044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:20.967304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:20.967325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:20.972101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:20.972350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:20.972377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:20.977047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:20.977297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:20.977325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:20.982445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:20.982695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:20.982717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:20.988170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:20.988429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:20.988450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:20.993511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:20.993758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:20.993779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:20.998463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:20.998714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:20.998735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:21.003759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:21.004011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:21.004032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:21.009027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:21.009270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:21.009290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:21.014260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:21.014517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:21.014538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:21.019129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:21.019397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.431 [2024-11-18 13:09:21.019417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.431 [2024-11-18 13:09:21.024448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.431 [2024-11-18 13:09:21.024690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.024711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.029922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.030178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.030199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.035006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.035269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.035290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.040307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.040561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.040582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.046003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.046254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.046275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.051076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.051314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.051335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.056407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.056659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.056680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.063255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.063510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.063532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.069026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.069282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.069302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.073830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.074083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.074104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.078731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.078978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.078999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.083732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.083994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.084015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.088629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.088881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.088901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.093487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.093740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.093761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.098301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.098568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.098590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.103084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.103336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.103364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.108010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.108260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.108281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.113021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.113272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.113296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.118439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.118690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.118710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.432 [2024-11-18 13:09:21.123542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.432 [2024-11-18 13:09:21.123791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.432 [2024-11-18 13:09:21.123812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.128327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.128592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.128615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.133161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.133422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.133443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.137900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.138152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.138173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.142525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.142779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.142800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.147338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.147598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.147619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.152246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.152503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.152524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.157119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.157172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.157190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.162581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.162645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.162663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.168028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.168280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.168302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.172847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.173098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.173119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.178038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.178288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.178308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.182861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.183113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.183134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.187790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.188039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.188060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.192452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.192719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.192740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.197181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.197450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.197474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.693 [2024-11-18 13:09:21.202113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.693 [2024-11-18 13:09:21.202383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.693 [2024-11-18 13:09:21.202404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.206915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.207168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.207189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.211555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.211814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.211834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.216370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.216641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.216662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.221238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.221505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.221525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.225857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.226109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.226130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.230685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.230935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.230956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.235523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.235776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.235798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.240430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.240686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.240706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.246342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.246604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.246625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.251497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.251759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.251781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.256527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.256783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.256804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.261385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.261635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.261656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.266218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.266475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.266496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.271022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.271269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.271290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.275917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.276167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.276188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.280614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.280868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.280889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.285327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.285587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.285608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.290230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.290487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.290507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.295078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.295329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.295349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.299966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.300219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.300241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.304983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.305240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.305262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.309913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.310168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.310189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.314988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.315244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.315265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.319875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.320133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.320154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.324779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.325030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.325055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.329642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.329895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.329917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.334583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.694 [2024-11-18 13:09:21.334839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.694 [2024-11-18 13:09:21.334860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:23.694 [2024-11-18 13:09:21.339647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.695 [2024-11-18 13:09:21.339914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.695 [2024-11-18 13:09:21.339934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:23.695 [2024-11-18 13:09:21.344578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.695 [2024-11-18 13:09:21.344834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.695 [2024-11-18 13:09:21.344856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:23.695 [2024-11-18 13:09:21.349485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.695 [2024-11-18 13:09:21.349737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.695 [2024-11-18 13:09:21.349758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.695 [2024-11-18 13:09:21.354337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90
00:26:23.695 [2024-11-18 13:09:21.354592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.695 [2024-11-18 13:09:21.354613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.695 [2024-11-18 13:09:21.359056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.695 [2024-11-18 13:09:21.359317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.695 [2024-11-18 13:09:21.359338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.695 [2024-11-18 13:09:21.363633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.695 [2024-11-18 13:09:21.363889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.695 [2024-11-18 13:09:21.363910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.695 [2024-11-18 13:09:21.368307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.695 [2024-11-18 13:09:21.368563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.695 [2024-11-18 13:09:21.368583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.695 [2024-11-18 13:09:21.373114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.695 [2024-11-18 13:09:21.373380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.695 [2024-11-18 13:09:21.373401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.695 [2024-11-18 13:09:21.378552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.695 [2024-11-18 13:09:21.378803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.695 [2024-11-18 13:09:21.378824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.695 [2024-11-18 13:09:21.383769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.695 [2024-11-18 13:09:21.384019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.695 [2024-11-18 13:09:21.384039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.695 [2024-11-18 13:09:21.388892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.695 [2024-11-18 13:09:21.389156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.695 [2024-11-18 13:09:21.389177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 
13:09:21.393681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.393930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.393951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.398471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.398720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.398741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.403149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.403403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.403423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.407762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.408011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.408033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.412388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.412636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.412657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.417326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.417585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.417606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.423253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.423504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.423525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.429609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.429857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.429878] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.435961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.436208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.436230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.442516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.442765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.442786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.448986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.449236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 13:09:21.449257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.956 [2024-11-18 13:09:21.455218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.956 [2024-11-18 13:09:21.455472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.956 [2024-11-18 
13:09:21.455494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.461808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.462039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.462062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.468071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.468327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.468348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.474728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.474988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.475010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.481056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.481303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.481323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.487683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.487933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.487953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.494041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.494290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.494312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.500552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.500789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.500809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.507042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.507291] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.507312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.513503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.513762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.513783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.518936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.519165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.519186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.525022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.525120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.525138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.531440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 
13:09:21.531712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.531732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.537962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.538244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.538265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.544219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.544528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.544549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.550794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.551064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.551086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.556765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.557027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.557048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.564203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.564501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.564522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.570946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.571167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.571192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.575560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.575780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.575801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.579878] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.580099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.580120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.584165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.584389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.584410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.588479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.588699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.588719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.592779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.592998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.593017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.597091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.597310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.597331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.601402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.601620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.601640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.605699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.605917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.957 [2024-11-18 13:09:21.605937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.957 [2024-11-18 13:09:21.609978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.957 [2024-11-18 13:09:21.610202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.958 [2024-11-18 13:09:21.610222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.958 [2024-11-18 13:09:21.614309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.958 [2024-11-18 13:09:21.614534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.958 [2024-11-18 13:09:21.614554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.958 [2024-11-18 13:09:21.618632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.958 [2024-11-18 13:09:21.618850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.958 [2024-11-18 13:09:21.618870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.958 [2024-11-18 13:09:21.622930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.958 [2024-11-18 13:09:21.623152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.958 [2024-11-18 13:09:21.623172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.958 [2024-11-18 13:09:21.627214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.958 [2024-11-18 13:09:21.627437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.958 [2024-11-18 13:09:21.627458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.958 [2024-11-18 13:09:21.631496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.958 [2024-11-18 13:09:21.631716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.958 [2024-11-18 13:09:21.631736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.958 [2024-11-18 13:09:21.635777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.958 [2024-11-18 13:09:21.635998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.958 [2024-11-18 13:09:21.636019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.958 [2024-11-18 13:09:21.640274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.958 [2024-11-18 13:09:21.640501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.958 [2024-11-18 13:09:21.640522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.958 [2024-11-18 13:09:21.644577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.958 [2024-11-18 13:09:21.644796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:23.958 [2024-11-18 13:09:21.644818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.958 [2024-11-18 13:09:21.648862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:23.958 [2024-11-18 13:09:21.649083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.958 [2024-11-18 13:09:21.649103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.653178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.653404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.653423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.657513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.657732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.657752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.661850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.662069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.662089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.666158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.666385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.666405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.670457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.670676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.670696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.674741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.674958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.674978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.679045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.679263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.679283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.683313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.683532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.683556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.687609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.687830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.687850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.691933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.692154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.692174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.219 6040.00 IOPS, 755.00 MiB/s [2024-11-18T12:09:21.921Z] [2024-11-18 13:09:21.697015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.697196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.697215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.701058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.701229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.701249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.705693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.705996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.706015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.711515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.711723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.711744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.716928] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.717124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.717145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.722537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.722704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.722726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.727579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.727747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.727767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.732223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.732385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.732404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.736996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.737153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.737172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.741679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.741854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.741873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.746869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.219 [2024-11-18 13:09:21.747092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.219 [2024-11-18 13:09:21.747113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.219 [2024-11-18 13:09:21.752863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.753031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.753052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.759480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.759715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.759743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.765751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.766006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.766026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.770782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.770939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.770959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.775197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.775356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.775376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.779615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.779800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.779819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.783619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.783787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.783806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.788109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.788299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.788318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.793385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.793557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:24.220 [2024-11-18 13:09:21.793575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.798868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.799044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.799063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.803920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.804142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.804163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.809034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.809229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.809248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.814421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.814629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.814653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.819668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.819959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.819979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.824873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.825045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.825065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.830069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.830255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.830274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.835734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.836009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.836030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.840899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.841086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.841105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.846564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.846850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.846871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.851874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.852064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.852083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.857852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 
00:26:24.220 [2024-11-18 13:09:21.858034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.858053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.862769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.862925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.862944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.866981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.867139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.867158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.871150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.871358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.871378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.875391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.875603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.875624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.879524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.879693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.879712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.883714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.220 [2024-11-18 13:09:21.883871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.220 [2024-11-18 13:09:21.883889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.220 [2024-11-18 13:09:21.887674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.221 [2024-11-18 13:09:21.887878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.221 [2024-11-18 13:09:21.887899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.221 [2024-11-18 
13:09:21.892809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.221 [2024-11-18 13:09:21.892966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.221 [2024-11-18 13:09:21.892985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.221 [2024-11-18 13:09:21.897852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.221 [2024-11-18 13:09:21.898028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.221 [2024-11-18 13:09:21.898055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.221 [2024-11-18 13:09:21.902284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.221 [2024-11-18 13:09:21.902499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.221 [2024-11-18 13:09:21.902520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.221 [2024-11-18 13:09:21.906536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.221 [2024-11-18 13:09:21.906703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.221 [2024-11-18 13:09:21.906722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.221 [2024-11-18 13:09:21.910641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.221 [2024-11-18 13:09:21.910849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.221 [2024-11-18 13:09:21.910869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.221 [2024-11-18 13:09:21.914708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.221 [2024-11-18 13:09:21.914911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.221 [2024-11-18 13:09:21.914929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.918975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.919189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.919209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.923226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.923407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.923426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.927403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.927604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.927626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.931586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.931750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.931769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.935767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.935932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.935951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.939939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.940139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.940158] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.943998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.944154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.944173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.948180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.948347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.948373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.952797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.952969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.952989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.956900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.957125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.957146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.961076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.961245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.961264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.965746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.965899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.965918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.971521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.971674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.971694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.976033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.976201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.976221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.980242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.980408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.980428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.984448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.984608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.984627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.988777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.988931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.988950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.992736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.992898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.992917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:21.996638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:21.996792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:21.996811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:22.000528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:22.000681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:22.000700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:22.004377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:22.004533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:22.004553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:22.008310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 
00:26:24.482 [2024-11-18 13:09:22.008474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:22.008498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:22.012289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:22.012447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:22.012467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:22.016874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:22.017036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.482 [2024-11-18 13:09:22.017055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.482 [2024-11-18 13:09:22.021236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.482 [2024-11-18 13:09:22.021402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.021421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.025323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.025489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.025508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.029379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.029553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.029572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.033469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.033623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.033643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.037407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.037566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.037585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 
13:09:22.041417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.041575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.041594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.045868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.046029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.046049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.050350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.050521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.050540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.054401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.054559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.054579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.058428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.058587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.058606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.062663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.062822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.062841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.066725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.066876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.066895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.070749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.070902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.070921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.074754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.074910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.074928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.078716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.078879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.078898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.082746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.082910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.082929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.087688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.087848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.087867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.092240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.092405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.092425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.096259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.096419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.096438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.100268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.100435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.100454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.104199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.104362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.104381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.108144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.108293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.108312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.112605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.112770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.112790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.117224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.117384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.117408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.121381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.121550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.121568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.125340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.125503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.125522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.129306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.129482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.483 [2024-11-18 13:09:22.129501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.483 [2024-11-18 13:09:22.133291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.483 [2024-11-18 13:09:22.133451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.133470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.137198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.484 [2024-11-18 13:09:22.137344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.137371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.141228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.484 [2024-11-18 13:09:22.141393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.141412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.145944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.484 [2024-11-18 13:09:22.146094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.146114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.150489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.484 [2024-11-18 13:09:22.150643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.150662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.154463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 
00:26:24.484 [2024-11-18 13:09:22.154709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.154728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.158497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.484 [2024-11-18 13:09:22.158651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.158671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.162384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.484 [2024-11-18 13:09:22.162542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.162561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.166330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.484 [2024-11-18 13:09:22.166501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.166520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.170961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.484 [2024-11-18 13:09:22.171107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.171127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.484 [2024-11-18 13:09:22.175393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.484 [2024-11-18 13:09:22.175552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.484 [2024-11-18 13:09:22.175571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.179432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.179613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.179632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.184266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.184478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.184497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 
13:09:22.189675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.189934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.189955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.196074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.196281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.196301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.201960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.202137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.202157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.206695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.206865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.206884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.211931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.212124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.212144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.217081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.217259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.217279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.222259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.222472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.222491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.227030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.227196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.227216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.231596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.231756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.231775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.235482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.235647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.745 [2024-11-18 13:09:22.235667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.745 [2024-11-18 13:09:22.239388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.745 [2024-11-18 13:09:22.239548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.239569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.746 [2024-11-18 13:09:22.243266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.746 [2024-11-18 13:09:22.243433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.243454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.746 [2024-11-18 13:09:22.247125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.746 [2024-11-18 13:09:22.247287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.247307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.746 [2024-11-18 13:09:22.251014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.746 [2024-11-18 13:09:22.251176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.251197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.746 [2024-11-18 13:09:22.254874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.746 [2024-11-18 13:09:22.255033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.255052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.746 [2024-11-18 13:09:22.258792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.746 [2024-11-18 13:09:22.258955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.258975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.746 [2024-11-18 13:09:22.262671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.746 [2024-11-18 13:09:22.262827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.262846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.746 [2024-11-18 13:09:22.266511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.746 [2024-11-18 13:09:22.266669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.266689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.746 [2024-11-18 13:09:22.270310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.746 [2024-11-18 13:09:22.270468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.270487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.746 [2024-11-18 13:09:22.274106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:24.746 [2024-11-18 13:09:22.274268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.746 [2024-11-18 13:09:22.274287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... repeated log entries elided: the same three-message cycle recurs many more times through 00:26:25.009 — tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90, followed by the nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE sqid:1 cid:15 nsid:1 len:32 notice (varying lba), followed by the nvme_qpair.c: 474:spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061 ...]
00:26:25.009 [2024-11-18 13:09:22.568834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.009 [2024-11-18 13:09:22.568986] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.009 [2024-11-18 13:09:22.569005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.009 [2024-11-18 13:09:22.573209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.009 [2024-11-18 13:09:22.573370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.009 [2024-11-18 13:09:22.573388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.009 [2024-11-18 13:09:22.577966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.009 [2024-11-18 13:09:22.578120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.009 [2024-11-18 13:09:22.578139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.009 [2024-11-18 13:09:22.582205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.009 [2024-11-18 13:09:22.582370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.009 [2024-11-18 13:09:22.582389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.586134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 
13:09:22.586291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.586310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.590048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.590204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.590223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.594058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.594219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.594241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.598006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.598161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.598180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.601927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.602090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.602108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.605942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.606100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.606119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.610018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.610174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.610193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.614316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.614476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.614495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.619174] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.619334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.619358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.623906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.624064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.624083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.628200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.628368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.628387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.632158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.632320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.632340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.636082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.636234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.636252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.640319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.640487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.640506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.644439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.644598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.644617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.648371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.648530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.648549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.652388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.652549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.652567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.656735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.656892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.656911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.661197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.661366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.661386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.665719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.665876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.665895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.669667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.669819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.669838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.673657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.673816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.673835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.677574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.677730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.677750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.681477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.681637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.010 [2024-11-18 13:09:22.681656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.685302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.685462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.685481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.689136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.689299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.689318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.010 [2024-11-18 13:09:22.692994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.010 [2024-11-18 13:09:22.693149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.010 [2024-11-18 13:09:22.693168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.011 [2024-11-18 13:09:22.696871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18485c0) with pdu=0x2000166fef90 00:26:25.011 [2024-11-18 13:09:22.697026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.011 [2024-11-18 13:09:22.697045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.011 6576.50 IOPS, 822.06 MiB/s 00:26:25.011 Latency(us) 00:26:25.011 [2024-11-18T12:09:22.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.011 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:25.011 nvme0n1 : 2.00 6575.16 821.90 0.00 0.00 2429.35 1624.15 7693.36 00:26:25.011 [2024-11-18T12:09:22.713Z] =================================================================================================================== 00:26:25.011 [2024-11-18T12:09:22.713Z] Total : 6575.16 821.90 0.00 0.00 2429.35 1624.15 7693.36 00:26:25.270 { 00:26:25.270 "results": [ 00:26:25.270 { 00:26:25.270 "job": "nvme0n1", 00:26:25.270 "core_mask": "0x2", 00:26:25.270 "workload": "randwrite", 00:26:25.270 "status": "finished", 00:26:25.270 "queue_depth": 16, 00:26:25.270 "io_size": 131072, 00:26:25.270 "runtime": 2.003449, 00:26:25.270 "iops": 6575.161134623342, 00:26:25.270 "mibps": 821.8951418279178, 00:26:25.270 "io_failed": 0, 00:26:25.270 "io_timeout": 0, 00:26:25.270 "avg_latency_us": 2429.3478082639394, 00:26:25.270 "min_latency_us": 1624.1530434782608, 00:26:25.270 "max_latency_us": 7693.356521739131 00:26:25.270 } 00:26:25.270 ], 00:26:25.270 "core_count": 1 00:26:25.270 } 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:25.270 | .driver_specific 00:26:25.270 | .nvme_error 00:26:25.270 | .status_code 00:26:25.270 | 
.command_transient_transport_error' 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 424 > 0 )) 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2472582 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2472582 ']' 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2472582 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:25.270 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2472582 00:26:25.530 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:25.530 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:25.530 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2472582' 00:26:25.530 killing process with pid 2472582 00:26:25.530 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2472582 00:26:25.530 Received shutdown signal, test time was about 2.000000 seconds 00:26:25.530 00:26:25.530 Latency(us) 00:26:25.530 [2024-11-18T12:09:23.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.530 [2024-11-18T12:09:23.232Z] 
=================================================================================================================== 00:26:25.530 [2024-11-18T12:09:23.232Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.530 13:09:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2472582 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2470741 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2470741 ']' 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2470741 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2470741 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2470741' 00:26:25.530 killing process with pid 2470741 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2470741 00:26:25.530 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2470741 00:26:25.790 00:26:25.790 real 0m14.173s 00:26:25.790 user 0m27.125s 00:26:25.790 sys 0m4.668s 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.790 ************************************ 00:26:25.790 END TEST nvmf_digest_error 00:26:25.790 ************************************ 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:25.790 rmmod nvme_tcp 00:26:25.790 rmmod nvme_fabrics 00:26:25.790 rmmod nvme_keyring 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2470741 ']' 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2470741 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 2470741 ']' 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 2470741 00:26:25.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2470741) - No such process 00:26:25.790 13:09:23 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 2470741 is not found' 00:26:25.790 Process with pid 2470741 is not found 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.790 13:09:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:28.327 00:26:28.327 real 0m36.528s 00:26:28.327 user 0m55.739s 00:26:28.327 sys 0m13.725s 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:28.327 ************************************ 00:26:28.327 END TEST nvmf_digest 00:26:28.327 ************************************ 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.327 ************************************ 00:26:28.327 START TEST nvmf_bdevperf 00:26:28.327 ************************************ 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:28.327 * Looking for test storage... 
00:26:28.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:28.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.327 --rc genhtml_branch_coverage=1 00:26:28.327 --rc genhtml_function_coverage=1 00:26:28.327 --rc genhtml_legend=1 00:26:28.327 --rc geninfo_all_blocks=1 00:26:28.327 --rc geninfo_unexecuted_blocks=1 00:26:28.327 00:26:28.327 ' 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:26:28.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.327 --rc genhtml_branch_coverage=1 00:26:28.327 --rc genhtml_function_coverage=1 00:26:28.327 --rc genhtml_legend=1 00:26:28.327 --rc geninfo_all_blocks=1 00:26:28.327 --rc geninfo_unexecuted_blocks=1 00:26:28.327 00:26:28.327 ' 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:28.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.327 --rc genhtml_branch_coverage=1 00:26:28.327 --rc genhtml_function_coverage=1 00:26:28.327 --rc genhtml_legend=1 00:26:28.327 --rc geninfo_all_blocks=1 00:26:28.327 --rc geninfo_unexecuted_blocks=1 00:26:28.327 00:26:28.327 ' 00:26:28.327 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:28.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.328 --rc genhtml_branch_coverage=1 00:26:28.328 --rc genhtml_function_coverage=1 00:26:28.328 --rc genhtml_legend=1 00:26:28.328 --rc geninfo_all_blocks=1 00:26:28.328 --rc geninfo_unexecuted_blocks=1 00:26:28.328 00:26:28.328 ' 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:28.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:28.328 13:09:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:34.901 13:09:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:34.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:34.901 
13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:34.901 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:34.901 Found net devices under 0000:86:00.0: cvl_0_0 00:26:34.901 13:09:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:34.901 Found net devices under 0000:86:00.1: cvl_0_1 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.901 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:34.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:26:34.902 00:26:34.902 --- 10.0.0.2 ping statistics --- 00:26:34.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.902 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:26:34.902 00:26:34.902 --- 10.0.0.1 ping statistics --- 00:26:34.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.902 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2476593 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2476593 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2476593 ']' 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:34.902 13:09:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.902 [2024-11-18 13:09:31.794708] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:26:34.902 [2024-11-18 13:09:31.794760] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.902 [2024-11-18 13:09:31.876659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:34.902 [2024-11-18 13:09:31.919392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.902 [2024-11-18 13:09:31.919432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.902 [2024-11-18 13:09:31.919440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.902 [2024-11-18 13:09:31.919446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.902 [2024-11-18 13:09:31.919451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:34.902 [2024-11-18 13:09:31.920923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.902 [2024-11-18 13:09:31.921028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.902 [2024-11-18 13:09:31.921029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.162 [2024-11-18 13:09:32.675693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.162 Malloc0 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.162 [2024-11-18 13:09:32.735612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:35.162 
13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.162 { 00:26:35.162 "params": { 00:26:35.162 "name": "Nvme$subsystem", 00:26:35.162 "trtype": "$TEST_TRANSPORT", 00:26:35.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.162 "adrfam": "ipv4", 00:26:35.162 "trsvcid": "$NVMF_PORT", 00:26:35.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.162 "hdgst": ${hdgst:-false}, 00:26:35.162 "ddgst": ${ddgst:-false} 00:26:35.162 }, 00:26:35.162 "method": "bdev_nvme_attach_controller" 00:26:35.162 } 00:26:35.162 EOF 00:26:35.162 )") 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:35.162 13:09:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:35.162 "params": { 00:26:35.162 "name": "Nvme1", 00:26:35.162 "trtype": "tcp", 00:26:35.162 "traddr": "10.0.0.2", 00:26:35.162 "adrfam": "ipv4", 00:26:35.162 "trsvcid": "4420", 00:26:35.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.162 "hdgst": false, 00:26:35.162 "ddgst": false 00:26:35.162 }, 00:26:35.162 "method": "bdev_nvme_attach_controller" 00:26:35.162 }' 00:26:35.162 [2024-11-18 13:09:32.789700] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:26:35.162 [2024-11-18 13:09:32.789743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476841 ] 00:26:35.421 [2024-11-18 13:09:32.866927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.421 [2024-11-18 13:09:32.908942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.680 Running I/O for 1 seconds... 00:26:36.618 11169.00 IOPS, 43.63 MiB/s 00:26:36.618 Latency(us) 00:26:36.618 [2024-11-18T12:09:34.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.618 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:36.618 Verification LBA range: start 0x0 length 0x4000 00:26:36.618 Nvme1n1 : 1.05 10791.77 42.16 0.00 0.00 11368.55 2407.74 42626.89 00:26:36.618 [2024-11-18T12:09:34.320Z] =================================================================================================================== 00:26:36.618 [2024-11-18T12:09:34.320Z] Total : 10791.77 42.16 0.00 0.00 11368.55 2407.74 42626.89 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2477074 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.877 { 00:26:36.877 "params": { 00:26:36.877 "name": "Nvme$subsystem", 00:26:36.877 "trtype": "$TEST_TRANSPORT", 00:26:36.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.877 "adrfam": "ipv4", 00:26:36.877 "trsvcid": "$NVMF_PORT", 00:26:36.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.877 "hdgst": ${hdgst:-false}, 00:26:36.877 "ddgst": ${ddgst:-false} 00:26:36.877 }, 00:26:36.877 "method": "bdev_nvme_attach_controller" 00:26:36.877 } 00:26:36.877 EOF 00:26:36.877 )") 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:36.877 13:09:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:36.877 "params": { 00:26:36.877 "name": "Nvme1", 00:26:36.877 "trtype": "tcp", 00:26:36.877 "traddr": "10.0.0.2", 00:26:36.877 "adrfam": "ipv4", 00:26:36.877 "trsvcid": "4420", 00:26:36.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.877 "hdgst": false, 00:26:36.877 "ddgst": false 00:26:36.877 }, 00:26:36.877 "method": "bdev_nvme_attach_controller" 00:26:36.877 }' 00:26:36.877 [2024-11-18 13:09:34.441448] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:26:36.877 [2024-11-18 13:09:34.441498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477074 ] 00:26:36.877 [2024-11-18 13:09:34.518883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.877 [2024-11-18 13:09:34.557160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.445 Running I/O for 15 seconds... 00:26:39.317 11129.00 IOPS, 43.47 MiB/s [2024-11-18T12:09:37.590Z] 11091.50 IOPS, 43.33 MiB/s [2024-11-18T12:09:37.590Z] 13:09:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2476593 00:26:39.888 13:09:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:39.888 [2024-11-18 13:09:37.409953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.409990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:57 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:39.888 [2024-11-18 13:09:37.410154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 
13:09:37.410499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.888 [2024-11-18 13:09:37.410518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.888 [2024-11-18 13:09:37.410526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410590] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 
[2024-11-18 13:09:37.410941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.410986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.410993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.411000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.411008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.411014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.411023] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.411029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.411037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.411044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.411052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.411058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.411066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.411073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.411082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.411088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.411098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.411104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.411113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.889 [2024-11-18 13:09:37.411120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.889 [2024-11-18 13:09:37.411129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411195] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 
[2024-11-18 13:09:37.411375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.890 [2024-11-18 13:09:37.411532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 
[2024-11-18 13:09:37.411636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.890 [2024-11-18 13:09:37.411730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.890 [2024-11-18 13:09:37.411736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.891 [2024-11-18 13:09:37.411780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 
[2024-11-18 13:09:37.411895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.411991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.411998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.412006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.412014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.412020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.412028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.891 [2024-11-18 13:09:37.412035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.412042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122fd00 is same with the state(6) to be set 00:26:39.891 [2024-11-18 13:09:37.412050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:39.891 [2024-11-18 13:09:37.412055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:39.891 [2024-11-18 13:09:37.412061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93192 len:8 PRP1 0x0 PRP2 0x0 00:26:39.891 [2024-11-18 13:09:37.412072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.412155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.891 [2024-11-18 13:09:37.412167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.412175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.891 [2024-11-18 13:09:37.412182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.412189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.891 [2024-11-18 13:09:37.412196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.412203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.891 [2024-11-18 13:09:37.412209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.891 [2024-11-18 13:09:37.412215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.891 [2024-11-18 13:09:37.415156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.891 
[2024-11-18 13:09:37.415184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.891 [2024-11-18 13:09:37.415808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.891 [2024-11-18 13:09:37.415853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.891 [2024-11-18 13:09:37.415878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.891 [2024-11-18 13:09:37.416384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.891 [2024-11-18 13:09:37.416564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.891 [2024-11-18 13:09:37.416573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.891 [2024-11-18 13:09:37.416581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.891 [2024-11-18 13:09:37.416589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.891 [2024-11-18 13:09:37.428450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.891 [2024-11-18 13:09:37.428818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.891 [2024-11-18 13:09:37.428836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.891 [2024-11-18 13:09:37.428844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.891 [2024-11-18 13:09:37.429018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.891 [2024-11-18 13:09:37.429194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.891 [2024-11-18 13:09:37.429204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.891 [2024-11-18 13:09:37.429212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.891 [2024-11-18 13:09:37.429220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.891 [2024-11-18 13:09:37.441347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.891 [2024-11-18 13:09:37.441689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.891 [2024-11-18 13:09:37.441734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.891 [2024-11-18 13:09:37.441760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.891 [2024-11-18 13:09:37.442273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.891 [2024-11-18 13:09:37.442464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.891 [2024-11-18 13:09:37.442475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.891 [2024-11-18 13:09:37.442482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.891 [2024-11-18 13:09:37.442488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.891 [2024-11-18 13:09:37.454207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.892 [2024-11-18 13:09:37.454652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.892 [2024-11-18 13:09:37.454670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.892 [2024-11-18 13:09:37.454678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.892 [2024-11-18 13:09:37.454852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.892 [2024-11-18 13:09:37.455024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.892 [2024-11-18 13:09:37.455033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.892 [2024-11-18 13:09:37.455040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.892 [2024-11-18 13:09:37.455047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.892 [2024-11-18 13:09:37.467043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.892 [2024-11-18 13:09:37.467487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.892 [2024-11-18 13:09:37.467504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.892 [2024-11-18 13:09:37.467512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.892 [2024-11-18 13:09:37.467676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.892 [2024-11-18 13:09:37.467840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.892 [2024-11-18 13:09:37.467849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.892 [2024-11-18 13:09:37.467855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.892 [2024-11-18 13:09:37.467862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.892 [2024-11-18 13:09:37.479851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.892 [2024-11-18 13:09:37.480271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.892 [2024-11-18 13:09:37.480319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.892 [2024-11-18 13:09:37.480343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.892 [2024-11-18 13:09:37.480948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.892 [2024-11-18 13:09:37.481459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.892 [2024-11-18 13:09:37.481469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.892 [2024-11-18 13:09:37.481475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.892 [2024-11-18 13:09:37.481483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.892 [2024-11-18 13:09:37.492778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.892 [2024-11-18 13:09:37.493135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.892 [2024-11-18 13:09:37.493152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.892 [2024-11-18 13:09:37.493160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.892 [2024-11-18 13:09:37.493323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.892 [2024-11-18 13:09:37.493515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.892 [2024-11-18 13:09:37.493525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.892 [2024-11-18 13:09:37.493532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.892 [2024-11-18 13:09:37.493539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.892 [2024-11-18 13:09:37.505689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.892 [2024-11-18 13:09:37.506044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.892 [2024-11-18 13:09:37.506061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.892 [2024-11-18 13:09:37.506068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.892 [2024-11-18 13:09:37.506232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.892 [2024-11-18 13:09:37.506401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.892 [2024-11-18 13:09:37.506411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.892 [2024-11-18 13:09:37.506417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.892 [2024-11-18 13:09:37.506424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.892 [2024-11-18 13:09:37.518517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.892 [2024-11-18 13:09:37.518868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.892 [2024-11-18 13:09:37.518912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.892 [2024-11-18 13:09:37.518936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.892 [2024-11-18 13:09:37.519406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.892 [2024-11-18 13:09:37.519581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.892 [2024-11-18 13:09:37.519594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.892 [2024-11-18 13:09:37.519600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.892 [2024-11-18 13:09:37.519607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.892 [2024-11-18 13:09:37.531473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.892 [2024-11-18 13:09:37.531823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.892 [2024-11-18 13:09:37.531841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.892 [2024-11-18 13:09:37.531849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.892 [2024-11-18 13:09:37.532022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.892 [2024-11-18 13:09:37.532196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.892 [2024-11-18 13:09:37.532206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.892 [2024-11-18 13:09:37.532213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.892 [2024-11-18 13:09:37.532220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.892 [2024-11-18 13:09:37.544348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.892 [2024-11-18 13:09:37.544706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.892 [2024-11-18 13:09:37.544723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.892 [2024-11-18 13:09:37.544730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.892 [2024-11-18 13:09:37.544894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.893 [2024-11-18 13:09:37.545058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.893 [2024-11-18 13:09:37.545067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.893 [2024-11-18 13:09:37.545073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.893 [2024-11-18 13:09:37.545080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.893 [2024-11-18 13:09:37.557196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.893 [2024-11-18 13:09:37.557537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.893 [2024-11-18 13:09:37.557576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.893 [2024-11-18 13:09:37.557602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.893 [2024-11-18 13:09:37.558183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.893 [2024-11-18 13:09:37.558428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.893 [2024-11-18 13:09:37.558438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.893 [2024-11-18 13:09:37.558445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.893 [2024-11-18 13:09:37.558457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.893 [2024-11-18 13:09:37.569987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:39.893 [2024-11-18 13:09:37.570412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.893 [2024-11-18 13:09:37.570430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:39.893 [2024-11-18 13:09:37.570438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:39.893 [2024-11-18 13:09:37.570601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:39.893 [2024-11-18 13:09:37.570765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:39.893 [2024-11-18 13:09:37.570775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:39.893 [2024-11-18 13:09:37.570782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:39.893 [2024-11-18 13:09:37.570788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:39.893 [2024-11-18 13:09:37.582966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.153 [2024-11-18 13:09:37.583379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.153 [2024-11-18 13:09:37.583426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.153 [2024-11-18 13:09:37.583450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.153 [2024-11-18 13:09:37.584031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.153 [2024-11-18 13:09:37.584629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.153 [2024-11-18 13:09:37.584657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.153 [2024-11-18 13:09:37.584689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.153 [2024-11-18 13:09:37.584704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.153 [2024-11-18 13:09:37.597983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.153 [2024-11-18 13:09:37.598502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.153 [2024-11-18 13:09:37.598553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.153 [2024-11-18 13:09:37.598577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.153 [2024-11-18 13:09:37.599158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.153 [2024-11-18 13:09:37.599669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.153 [2024-11-18 13:09:37.599683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.153 [2024-11-18 13:09:37.599692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.153 [2024-11-18 13:09:37.599702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.153 [2024-11-18 13:09:37.610973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.153 [2024-11-18 13:09:37.611375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.153 [2024-11-18 13:09:37.611392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.153 [2024-11-18 13:09:37.611400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.153 [2024-11-18 13:09:37.611567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.153 [2024-11-18 13:09:37.611735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.153 [2024-11-18 13:09:37.611745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.153 [2024-11-18 13:09:37.611751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.153 [2024-11-18 13:09:37.611758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.153 [2024-11-18 13:09:37.624030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.624454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.624473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.624481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.624654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.624828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.624838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.624845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.624851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.154 [2024-11-18 13:09:37.637106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.637523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.637542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.637551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.637725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.637898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.637907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.637914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.637921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.154 [2024-11-18 13:09:37.649899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.650327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.650385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.650411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.650852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.651016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.651024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.651030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.651036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.154 [2024-11-18 13:09:37.662797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.663217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.663235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.663243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.663431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.663605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.663615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.663623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.663630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.154 [2024-11-18 13:09:37.675881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.676311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.676329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.676338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.676522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.676702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.676712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.676719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.676726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.154 [2024-11-18 13:09:37.688936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.689298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.689316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.689324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.689509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.689695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.689707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.689714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.689721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.154 [2024-11-18 13:09:37.701885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.702246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.702263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.702270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.702459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.702633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.702643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.702650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.702656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.154 [2024-11-18 13:09:37.714699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.715098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.715116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.715124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.715287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.715478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.715487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.715494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.715501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.154 [2024-11-18 13:09:37.727654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.728060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.728104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.728128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.728721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.729132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.729142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.729148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.729159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.154 [2024-11-18 13:09:37.740469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.154 [2024-11-18 13:09:37.740903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.154 [2024-11-18 13:09:37.740944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.154 [2024-11-18 13:09:37.740970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.154 [2024-11-18 13:09:37.741568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.154 [2024-11-18 13:09:37.741866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.154 [2024-11-18 13:09:37.741876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.154 [2024-11-18 13:09:37.741883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.154 [2024-11-18 13:09:37.741890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.155 [2024-11-18 13:09:37.753421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.155 [2024-11-18 13:09:37.753834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.155 [2024-11-18 13:09:37.753851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.155 [2024-11-18 13:09:37.753859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.155 [2024-11-18 13:09:37.754023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.155 [2024-11-18 13:09:37.754187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.155 [2024-11-18 13:09:37.754197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.155 [2024-11-18 13:09:37.754203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.155 [2024-11-18 13:09:37.754210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.155 [2024-11-18 13:09:37.766342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.155 [2024-11-18 13:09:37.766769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.155 [2024-11-18 13:09:37.766813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.155 [2024-11-18 13:09:37.766836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.155 [2024-11-18 13:09:37.767431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.155 [2024-11-18 13:09:37.768003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.155 [2024-11-18 13:09:37.768012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.155 [2024-11-18 13:09:37.768018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.155 [2024-11-18 13:09:37.768025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.155 [2024-11-18 13:09:37.779186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.155 [2024-11-18 13:09:37.779605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.155 [2024-11-18 13:09:37.779622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.155 [2024-11-18 13:09:37.779630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.155 [2024-11-18 13:09:37.779794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.155 [2024-11-18 13:09:37.779958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.155 [2024-11-18 13:09:37.779967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.155 [2024-11-18 13:09:37.779973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.155 [2024-11-18 13:09:37.779979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.155 [2024-11-18 13:09:37.792119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.155 [2024-11-18 13:09:37.792543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.155 [2024-11-18 13:09:37.792590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.155 [2024-11-18 13:09:37.792614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.155 [2024-11-18 13:09:37.793122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.155 [2024-11-18 13:09:37.793287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.155 [2024-11-18 13:09:37.793297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.155 [2024-11-18 13:09:37.793303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.155 [2024-11-18 13:09:37.793309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.155 [2024-11-18 13:09:37.804984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.155 [2024-11-18 13:09:37.805381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.155 [2024-11-18 13:09:37.805398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.155 [2024-11-18 13:09:37.805406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.155 [2024-11-18 13:09:37.805569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.155 [2024-11-18 13:09:37.805733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.155 [2024-11-18 13:09:37.805742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.155 [2024-11-18 13:09:37.805749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.155 [2024-11-18 13:09:37.805755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.155 [2024-11-18 13:09:37.817879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.155 [2024-11-18 13:09:37.818299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.155 [2024-11-18 13:09:37.818316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.155 [2024-11-18 13:09:37.818324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.155 [2024-11-18 13:09:37.818519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.155 [2024-11-18 13:09:37.818693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.155 [2024-11-18 13:09:37.818703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.155 [2024-11-18 13:09:37.818709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.155 [2024-11-18 13:09:37.818715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.155 [2024-11-18 13:09:37.830744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.155 [2024-11-18 13:09:37.831135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.155 [2024-11-18 13:09:37.831152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.155 [2024-11-18 13:09:37.831160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.155 [2024-11-18 13:09:37.831324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.155 [2024-11-18 13:09:37.831516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.155 [2024-11-18 13:09:37.831526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.155 [2024-11-18 13:09:37.831533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.155 [2024-11-18 13:09:37.831540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.155 [2024-11-18 13:09:37.843622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.155 [2024-11-18 13:09:37.844020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.155 [2024-11-18 13:09:37.844037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.155 [2024-11-18 13:09:37.844045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.155 [2024-11-18 13:09:37.844208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.155 [2024-11-18 13:09:37.844378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.155 [2024-11-18 13:09:37.844404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.155 [2024-11-18 13:09:37.844412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.155 [2024-11-18 13:09:37.844419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.416 [2024-11-18 13:09:37.856687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.416 [2024-11-18 13:09:37.857125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.416 [2024-11-18 13:09:37.857170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.416 [2024-11-18 13:09:37.857194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.416 [2024-11-18 13:09:37.857788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.416 [2024-11-18 13:09:37.858332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.416 [2024-11-18 13:09:37.858349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.416 [2024-11-18 13:09:37.858361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.416 [2024-11-18 13:09:37.858368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.416 [2024-11-18 13:09:37.869496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.416 [2024-11-18 13:09:37.869916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.416 [2024-11-18 13:09:37.869933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.416 [2024-11-18 13:09:37.869941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.416 [2024-11-18 13:09:37.870105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.416 [2024-11-18 13:09:37.870269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.416 [2024-11-18 13:09:37.870278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.416 [2024-11-18 13:09:37.870284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.416 [2024-11-18 13:09:37.870291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.416 [2024-11-18 13:09:37.882322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.416 [2024-11-18 13:09:37.882648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.416 [2024-11-18 13:09:37.882665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.416 [2024-11-18 13:09:37.882673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.416 [2024-11-18 13:09:37.882836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.416 [2024-11-18 13:09:37.883000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.416 [2024-11-18 13:09:37.883009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.416 [2024-11-18 13:09:37.883016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.416 [2024-11-18 13:09:37.883022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.416 9333.33 IOPS, 36.46 MiB/s [2024-11-18T12:09:38.118Z] [2024-11-18 13:09:37.895192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.416 [2024-11-18 13:09:37.895611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.416 [2024-11-18 13:09:37.895629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.416 [2024-11-18 13:09:37.895637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:37.895801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:37.895965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:37.895974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:37.895981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:37.895991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:37.908050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:37.908479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.417 [2024-11-18 13:09:37.908527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.417 [2024-11-18 13:09:37.908551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:37.909132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:37.909345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:37.909359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:37.909366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:37.909372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:37.920877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:37.921256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.417 [2024-11-18 13:09:37.921274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.417 [2024-11-18 13:09:37.921281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:37.921462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:37.921636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:37.921645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:37.921653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:37.921659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:37.933961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:37.934386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.417 [2024-11-18 13:09:37.934404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.417 [2024-11-18 13:09:37.934413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:37.934586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:37.934759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:37.934768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:37.934775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:37.934783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:37.946965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:37.947316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.417 [2024-11-18 13:09:37.947333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.417 [2024-11-18 13:09:37.947340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:37.947511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:37.947676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:37.947686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:37.947692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:37.947698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:37.959891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:37.960297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.417 [2024-11-18 13:09:37.960315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.417 [2024-11-18 13:09:37.960323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:37.960493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:37.960657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:37.960667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:37.960675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:37.960681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:37.972871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:37.973217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.417 [2024-11-18 13:09:37.973234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.417 [2024-11-18 13:09:37.973241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:37.973412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:37.973577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:37.973587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:37.973593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:37.973599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:37.985835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:37.986184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.417 [2024-11-18 13:09:37.986202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.417 [2024-11-18 13:09:37.986213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:37.986384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:37.986550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:37.986559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:37.986566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:37.986572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:37.998963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:37.999380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.417 [2024-11-18 13:09:37.999425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.417 [2024-11-18 13:09:37.999450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:38.000030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:38.000618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:38.000629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:38.000636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:38.000643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:38.011947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:38.012308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.417 [2024-11-18 13:09:38.012326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.417 [2024-11-18 13:09:38.012334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.417 [2024-11-18 13:09:38.012513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.417 [2024-11-18 13:09:38.012695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.417 [2024-11-18 13:09:38.012703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.417 [2024-11-18 13:09:38.012710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.417 [2024-11-18 13:09:38.012716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.417 [2024-11-18 13:09:38.024800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.417 [2024-11-18 13:09:38.025228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.418 [2024-11-18 13:09:38.025272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.418 [2024-11-18 13:09:38.025296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.418 [2024-11-18 13:09:38.025762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.418 [2024-11-18 13:09:38.025938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.418 [2024-11-18 13:09:38.025951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.418 [2024-11-18 13:09:38.025959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.418 [2024-11-18 13:09:38.025967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.418 [2024-11-18 13:09:38.037841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.418 [2024-11-18 13:09:38.038279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.418 [2024-11-18 13:09:38.038325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.418 [2024-11-18 13:09:38.038349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.418 [2024-11-18 13:09:38.038859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.418 [2024-11-18 13:09:38.039025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.418 [2024-11-18 13:09:38.039034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.418 [2024-11-18 13:09:38.039041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.418 [2024-11-18 13:09:38.039047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.418 [2024-11-18 13:09:38.052912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.418 [2024-11-18 13:09:38.053467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.418 [2024-11-18 13:09:38.053491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.418 [2024-11-18 13:09:38.053503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.418 [2024-11-18 13:09:38.053759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.418 [2024-11-18 13:09:38.054016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.418 [2024-11-18 13:09:38.054029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.418 [2024-11-18 13:09:38.054039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.418 [2024-11-18 13:09:38.054050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.418 [2024-11-18 13:09:38.065886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.418 [2024-11-18 13:09:38.066228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.418 [2024-11-18 13:09:38.066246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.418 [2024-11-18 13:09:38.066254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.418 [2024-11-18 13:09:38.066432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.418 [2024-11-18 13:09:38.066619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.418 [2024-11-18 13:09:38.066628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.418 [2024-11-18 13:09:38.066635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.418 [2024-11-18 13:09:38.066645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.418 [2024-11-18 13:09:38.078840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.418 [2024-11-18 13:09:38.079271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.418 [2024-11-18 13:09:38.079290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.418 [2024-11-18 13:09:38.079297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.418 [2024-11-18 13:09:38.079475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.418 [2024-11-18 13:09:38.079659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.418 [2024-11-18 13:09:38.079668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.418 [2024-11-18 13:09:38.079675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.418 [2024-11-18 13:09:38.079681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.418 [2024-11-18 13:09:38.091769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.418 [2024-11-18 13:09:38.092144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.418 [2024-11-18 13:09:38.092162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.418 [2024-11-18 13:09:38.092169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.418 [2024-11-18 13:09:38.092342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.418 [2024-11-18 13:09:38.092520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.418 [2024-11-18 13:09:38.092531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.418 [2024-11-18 13:09:38.092537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.418 [2024-11-18 13:09:38.092544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.418 [2024-11-18 13:09:38.104711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.418 [2024-11-18 13:09:38.105106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.418 [2024-11-18 13:09:38.105123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.418 [2024-11-18 13:09:38.105130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.418 [2024-11-18 13:09:38.105293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.418 [2024-11-18 13:09:38.105463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.418 [2024-11-18 13:09:38.105472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.418 [2024-11-18 13:09:38.105479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.418 [2024-11-18 13:09:38.105486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.679 [2024-11-18 13:09:38.117706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.679 [2024-11-18 13:09:38.118049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.679 [2024-11-18 13:09:38.118066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.679 [2024-11-18 13:09:38.118074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.679 [2024-11-18 13:09:38.118247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.679 [2024-11-18 13:09:38.118428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.679 [2024-11-18 13:09:38.118438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.679 [2024-11-18 13:09:38.118446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.679 [2024-11-18 13:09:38.118453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.679 [2024-11-18 13:09:38.130650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.679 [2024-11-18 13:09:38.131065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.679 [2024-11-18 13:09:38.131109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.679 [2024-11-18 13:09:38.131133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.679 [2024-11-18 13:09:38.131672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.679 [2024-11-18 13:09:38.131847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.679 [2024-11-18 13:09:38.131857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.679 [2024-11-18 13:09:38.131863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.679 [2024-11-18 13:09:38.131870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.679 [2024-11-18 13:09:38.143474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.679 [2024-11-18 13:09:38.143762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.680 [2024-11-18 13:09:38.143779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.680 [2024-11-18 13:09:38.143787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.680 [2024-11-18 13:09:38.143960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.680 [2024-11-18 13:09:38.144133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.680 [2024-11-18 13:09:38.144142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.680 [2024-11-18 13:09:38.144149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.680 [2024-11-18 13:09:38.144155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.680 [2024-11-18 13:09:38.156454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.680 [2024-11-18 13:09:38.156798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.680 [2024-11-18 13:09:38.156815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.680 [2024-11-18 13:09:38.156822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.680 [2024-11-18 13:09:38.156989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.680 [2024-11-18 13:09:38.157154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.680 [2024-11-18 13:09:38.157163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.680 [2024-11-18 13:09:38.157169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.680 [2024-11-18 13:09:38.157175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.680 [2024-11-18 13:09:38.169418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.680 [2024-11-18 13:09:38.169692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.680 [2024-11-18 13:09:38.169709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.680 [2024-11-18 13:09:38.169718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.680 [2024-11-18 13:09:38.169881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.680 [2024-11-18 13:09:38.170044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.680 [2024-11-18 13:09:38.170054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.680 [2024-11-18 13:09:38.170061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.680 [2024-11-18 13:09:38.170067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.680 [2024-11-18 13:09:38.182394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.680 [2024-11-18 13:09:38.182805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.680 [2024-11-18 13:09:38.182821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:40.680 [2024-11-18 13:09:38.182829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:40.680 [2024-11-18 13:09:38.183002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:40.680 [2024-11-18 13:09:38.183175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.680 [2024-11-18 13:09:38.183184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.680 [2024-11-18 13:09:38.183191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.680 [2024-11-18 13:09:38.183197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.680 [2024-11-18 13:09:38.195544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.680 [2024-11-18 13:09:38.195894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.680 [2024-11-18 13:09:38.195938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.680 [2024-11-18 13:09:38.195961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.680 [2024-11-18 13:09:38.196556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.680 [2024-11-18 13:09:38.197141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.680 [2024-11-18 13:09:38.197171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.680 [2024-11-18 13:09:38.197178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.680 [2024-11-18 13:09:38.197185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.680 [2024-11-18 13:09:38.208508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.680 [2024-11-18 13:09:38.208854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.680 [2024-11-18 13:09:38.208871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.680 [2024-11-18 13:09:38.208879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.680 [2024-11-18 13:09:38.209042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.680 [2024-11-18 13:09:38.209206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.680 [2024-11-18 13:09:38.209216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.680 [2024-11-18 13:09:38.209223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.680 [2024-11-18 13:09:38.209229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.680 [2024-11-18 13:09:38.221514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.680 [2024-11-18 13:09:38.221914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.680 [2024-11-18 13:09:38.221931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.680 [2024-11-18 13:09:38.221938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.680 [2024-11-18 13:09:38.222102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.680 [2024-11-18 13:09:38.222265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.680 [2024-11-18 13:09:38.222275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.680 [2024-11-18 13:09:38.222281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.680 [2024-11-18 13:09:38.222287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.680 [2024-11-18 13:09:38.234525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.680 [2024-11-18 13:09:38.234797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.680 [2024-11-18 13:09:38.234814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.680 [2024-11-18 13:09:38.234821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.680 [2024-11-18 13:09:38.234985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.680 [2024-11-18 13:09:38.235149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.680 [2024-11-18 13:09:38.235158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.680 [2024-11-18 13:09:38.235164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.680 [2024-11-18 13:09:38.235174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.680 [2024-11-18 13:09:38.247477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.680 [2024-11-18 13:09:38.247752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.680 [2024-11-18 13:09:38.247769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.680 [2024-11-18 13:09:38.247776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.680 [2024-11-18 13:09:38.247940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.680 [2024-11-18 13:09:38.248104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.680 [2024-11-18 13:09:38.248113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.680 [2024-11-18 13:09:38.248120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.680 [2024-11-18 13:09:38.248126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.680 [2024-11-18 13:09:38.260349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.680 [2024-11-18 13:09:38.260695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.680 [2024-11-18 13:09:38.260739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.680 [2024-11-18 13:09:38.260762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.680 [2024-11-18 13:09:38.261301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.680 [2024-11-18 13:09:38.261492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.680 [2024-11-18 13:09:38.261502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.680 [2024-11-18 13:09:38.261509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.680 [2024-11-18 13:09:38.261516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.680 [2024-11-18 13:09:38.273267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.680 [2024-11-18 13:09:38.273574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.681 [2024-11-18 13:09:38.273592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.681 [2024-11-18 13:09:38.273600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.681 [2024-11-18 13:09:38.273774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.681 [2024-11-18 13:09:38.273946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.681 [2024-11-18 13:09:38.273955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.681 [2024-11-18 13:09:38.273962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.681 [2024-11-18 13:09:38.273968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.681 [2024-11-18 13:09:38.286272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.681 [2024-11-18 13:09:38.286566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.681 [2024-11-18 13:09:38.286583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.681 [2024-11-18 13:09:38.286591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.681 [2024-11-18 13:09:38.286764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.681 [2024-11-18 13:09:38.286938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.681 [2024-11-18 13:09:38.286947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.681 [2024-11-18 13:09:38.286954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.681 [2024-11-18 13:09:38.286960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.681 [2024-11-18 13:09:38.299220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.681 [2024-11-18 13:09:38.299621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.681 [2024-11-18 13:09:38.299639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.681 [2024-11-18 13:09:38.299647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.681 [2024-11-18 13:09:38.299819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.681 [2024-11-18 13:09:38.299994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.681 [2024-11-18 13:09:38.300003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.681 [2024-11-18 13:09:38.300011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.681 [2024-11-18 13:09:38.300018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.681 [2024-11-18 13:09:38.312160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.681 [2024-11-18 13:09:38.312584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.681 [2024-11-18 13:09:38.312601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.681 [2024-11-18 13:09:38.312609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.681 [2024-11-18 13:09:38.312772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.681 [2024-11-18 13:09:38.312936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.681 [2024-11-18 13:09:38.312945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.681 [2024-11-18 13:09:38.312952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.681 [2024-11-18 13:09:38.312958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.681 [2024-11-18 13:09:38.325095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.681 [2024-11-18 13:09:38.325486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.681 [2024-11-18 13:09:38.325517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.681 [2024-11-18 13:09:38.325525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.681 [2024-11-18 13:09:38.325703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.681 [2024-11-18 13:09:38.325875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.681 [2024-11-18 13:09:38.325885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.681 [2024-11-18 13:09:38.325891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.681 [2024-11-18 13:09:38.325898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.681 [2024-11-18 13:09:38.338065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.681 [2024-11-18 13:09:38.338422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.681 [2024-11-18 13:09:38.338441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.681 [2024-11-18 13:09:38.338449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.681 [2024-11-18 13:09:38.338623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.681 [2024-11-18 13:09:38.338796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.681 [2024-11-18 13:09:38.338806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.681 [2024-11-18 13:09:38.338813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.681 [2024-11-18 13:09:38.338819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.681 [2024-11-18 13:09:38.350927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.681 [2024-11-18 13:09:38.351375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.681 [2024-11-18 13:09:38.351394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.681 [2024-11-18 13:09:38.351402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.681 [2024-11-18 13:09:38.351576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.681 [2024-11-18 13:09:38.351749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.681 [2024-11-18 13:09:38.351758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.681 [2024-11-18 13:09:38.351765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.681 [2024-11-18 13:09:38.351772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.681 [2024-11-18 13:09:38.363874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.681 [2024-11-18 13:09:38.364294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.681 [2024-11-18 13:09:38.364312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.681 [2024-11-18 13:09:38.364319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.681 [2024-11-18 13:09:38.364500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.681 [2024-11-18 13:09:38.364683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.681 [2024-11-18 13:09:38.364695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.681 [2024-11-18 13:09:38.364701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.681 [2024-11-18 13:09:38.364708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.942 [2024-11-18 13:09:38.376986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.942 [2024-11-18 13:09:38.377474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.942 [2024-11-18 13:09:38.377492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.942 [2024-11-18 13:09:38.377501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.942 [2024-11-18 13:09:38.377675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.942 [2024-11-18 13:09:38.377848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.942 [2024-11-18 13:09:38.377858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.942 [2024-11-18 13:09:38.377864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.942 [2024-11-18 13:09:38.377871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.942 [2024-11-18 13:09:38.390004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.942 [2024-11-18 13:09:38.390434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.942 [2024-11-18 13:09:38.390480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.942 [2024-11-18 13:09:38.390503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.942 [2024-11-18 13:09:38.391086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.942 [2024-11-18 13:09:38.391427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.942 [2024-11-18 13:09:38.391437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.942 [2024-11-18 13:09:38.391444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.942 [2024-11-18 13:09:38.391451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.942 [2024-11-18 13:09:38.402840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.942 [2024-11-18 13:09:38.403252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.942 [2024-11-18 13:09:38.403288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.942 [2024-11-18 13:09:38.403314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.942 [2024-11-18 13:09:38.403908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.942 [2024-11-18 13:09:38.404506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.942 [2024-11-18 13:09:38.404533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.942 [2024-11-18 13:09:38.404555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.942 [2024-11-18 13:09:38.404582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.942 [2024-11-18 13:09:38.415736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.942 [2024-11-18 13:09:38.416148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.942 [2024-11-18 13:09:38.416191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.942 [2024-11-18 13:09:38.416213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.942 [2024-11-18 13:09:38.416703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.942 [2024-11-18 13:09:38.416878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.942 [2024-11-18 13:09:38.416888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.942 [2024-11-18 13:09:38.416895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.942 [2024-11-18 13:09:38.416902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.942 [2024-11-18 13:09:38.428575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.942 [2024-11-18 13:09:38.428996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.942 [2024-11-18 13:09:38.429014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.942 [2024-11-18 13:09:38.429021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.942 [2024-11-18 13:09:38.429183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.942 [2024-11-18 13:09:38.429347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.942 [2024-11-18 13:09:38.429363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.942 [2024-11-18 13:09:38.429369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.942 [2024-11-18 13:09:38.429376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.441502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.441945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.441964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.441973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.442136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.442300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.442310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.943 [2024-11-18 13:09:38.442319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.943 [2024-11-18 13:09:38.442327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.454612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.455049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.455067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.455076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.455254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.455439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.455450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.943 [2024-11-18 13:09:38.455456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.943 [2024-11-18 13:09:38.455463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.467632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.468041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.468059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.468066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.468230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.468400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.468410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.943 [2024-11-18 13:09:38.468416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.943 [2024-11-18 13:09:38.468423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.480616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.481040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.481057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.481065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.481229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.481415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.481425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.943 [2024-11-18 13:09:38.481433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.943 [2024-11-18 13:09:38.481440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.493552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.493965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.494010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.494034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.494640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.494974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.494984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.943 [2024-11-18 13:09:38.494990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.943 [2024-11-18 13:09:38.494998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.506457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.506822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.506867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.506891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.507485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.507979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.507988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.943 [2024-11-18 13:09:38.507995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.943 [2024-11-18 13:09:38.508001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.519366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.519795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.519839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.519862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.520457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.520927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.520936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.943 [2024-11-18 13:09:38.520942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.943 [2024-11-18 13:09:38.520949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.532260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.532661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.532679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.532687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.532851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.533015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.533027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.943 [2024-11-18 13:09:38.533034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.943 [2024-11-18 13:09:38.533040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.545114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.545553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.545597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.545621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.546138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.546313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.546322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.943 [2024-11-18 13:09:38.546329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.943 [2024-11-18 13:09:38.546335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.943 [2024-11-18 13:09:38.558197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.943 [2024-11-18 13:09:38.558553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.943 [2024-11-18 13:09:38.558571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.943 [2024-11-18 13:09:38.558580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.943 [2024-11-18 13:09:38.558752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.943 [2024-11-18 13:09:38.558926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.943 [2024-11-18 13:09:38.558935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.944 [2024-11-18 13:09:38.558942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.944 [2024-11-18 13:09:38.558949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.944 [2024-11-18 13:09:38.571029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.944 [2024-11-18 13:09:38.571478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.944 [2024-11-18 13:09:38.571495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.944 [2024-11-18 13:09:38.571503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.944 [2024-11-18 13:09:38.571680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.944 [2024-11-18 13:09:38.571844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.944 [2024-11-18 13:09:38.571854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.944 [2024-11-18 13:09:38.571860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.944 [2024-11-18 13:09:38.571871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.944 [2024-11-18 13:09:38.583953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.944 [2024-11-18 13:09:38.584384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.944 [2024-11-18 13:09:38.584402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.944 [2024-11-18 13:09:38.584410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.944 [2024-11-18 13:09:38.584583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.944 [2024-11-18 13:09:38.584760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.944 [2024-11-18 13:09:38.584769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.944 [2024-11-18 13:09:38.584775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.944 [2024-11-18 13:09:38.584782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.944 [2024-11-18 13:09:38.596815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.944 [2024-11-18 13:09:38.597153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.944 [2024-11-18 13:09:38.597170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.944 [2024-11-18 13:09:38.597177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.944 [2024-11-18 13:09:38.597341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.944 [2024-11-18 13:09:38.597511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.944 [2024-11-18 13:09:38.597521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.944 [2024-11-18 13:09:38.597527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.944 [2024-11-18 13:09:38.597534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.944 [2024-11-18 13:09:38.609751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.944 [2024-11-18 13:09:38.610076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.944 [2024-11-18 13:09:38.610092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.944 [2024-11-18 13:09:38.610100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.944 [2024-11-18 13:09:38.610264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.944 [2024-11-18 13:09:38.610434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.944 [2024-11-18 13:09:38.610443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.944 [2024-11-18 13:09:38.610450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.944 [2024-11-18 13:09:38.610456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.944 [2024-11-18 13:09:38.622783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.944 [2024-11-18 13:09:38.623228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.944 [2024-11-18 13:09:38.623272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.944 [2024-11-18 13:09:38.623296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.944 [2024-11-18 13:09:38.623889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.944 [2024-11-18 13:09:38.624249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.944 [2024-11-18 13:09:38.624258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.944 [2024-11-18 13:09:38.624264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.944 [2024-11-18 13:09:38.624270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.944 [2024-11-18 13:09:38.635727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.944 [2024-11-18 13:09:38.636158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.944 [2024-11-18 13:09:38.636207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:40.944 [2024-11-18 13:09:38.636231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:40.944 [2024-11-18 13:09:38.636824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:40.944 [2024-11-18 13:09:38.637479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.944 [2024-11-18 13:09:38.637491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.944 [2024-11-18 13:09:38.637499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.944 [2024-11-18 13:09:38.637505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.205 [2024-11-18 13:09:38.648753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.205 [2024-11-18 13:09:38.649185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.205 [2024-11-18 13:09:38.649231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.205 [2024-11-18 13:09:38.649256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.205 [2024-11-18 13:09:38.649849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.205 [2024-11-18 13:09:38.650325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.205 [2024-11-18 13:09:38.650334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.205 [2024-11-18 13:09:38.650341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.205 [2024-11-18 13:09:38.650347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.205 [2024-11-18 13:09:38.661649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.205 [2024-11-18 13:09:38.662086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.205 [2024-11-18 13:09:38.662103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.205 [2024-11-18 13:09:38.662111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.205 [2024-11-18 13:09:38.662281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.205 [2024-11-18 13:09:38.662470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.205 [2024-11-18 13:09:38.662481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.205 [2024-11-18 13:09:38.662488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.205 [2024-11-18 13:09:38.662495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.205 [2024-11-18 13:09:38.674570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.205 [2024-11-18 13:09:38.675002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.205 [2024-11-18 13:09:38.675047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.205 [2024-11-18 13:09:38.675071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.205 [2024-11-18 13:09:38.675660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.205 [2024-11-18 13:09:38.676249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.205 [2024-11-18 13:09:38.676258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.205 [2024-11-18 13:09:38.676264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.205 [2024-11-18 13:09:38.676270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.205 [2024-11-18 13:09:38.687423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.205 [2024-11-18 13:09:38.687834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.205 [2024-11-18 13:09:38.687852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.205 [2024-11-18 13:09:38.687859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.205 [2024-11-18 13:09:38.688023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.205 [2024-11-18 13:09:38.688187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.205 [2024-11-18 13:09:38.688196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.205 [2024-11-18 13:09:38.688202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.205 [2024-11-18 13:09:38.688208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.205 [2024-11-18 13:09:38.700250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.205 [2024-11-18 13:09:38.700703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.205 [2024-11-18 13:09:38.700721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.205 [2024-11-18 13:09:38.700729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.205 [2024-11-18 13:09:38.700907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.205 [2024-11-18 13:09:38.701085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.205 [2024-11-18 13:09:38.701098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.205 [2024-11-18 13:09:38.701106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.205 [2024-11-18 13:09:38.701113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.205 [2024-11-18 13:09:38.713424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.205 [2024-11-18 13:09:38.713807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.205 [2024-11-18 13:09:38.713852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.205 [2024-11-18 13:09:38.713876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.205 [2024-11-18 13:09:38.714324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.205 [2024-11-18 13:09:38.714505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.205 [2024-11-18 13:09:38.714515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.205 [2024-11-18 13:09:38.714522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.205 [2024-11-18 13:09:38.714528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.205 [2024-11-18 13:09:38.726240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.205 [2024-11-18 13:09:38.726588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.205 [2024-11-18 13:09:38.726605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.205 [2024-11-18 13:09:38.726613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.205 [2024-11-18 13:09:38.726776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.205 [2024-11-18 13:09:38.726939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.205 [2024-11-18 13:09:38.726948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.205 [2024-11-18 13:09:38.726955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.205 [2024-11-18 13:09:38.726961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.205 [2024-11-18 13:09:38.739053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.205 [2024-11-18 13:09:38.739482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.205 [2024-11-18 13:09:38.739528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.205 [2024-11-18 13:09:38.739551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.205 [2024-11-18 13:09:38.740143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.205 [2024-11-18 13:09:38.740308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.205 [2024-11-18 13:09:38.740317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.205 [2024-11-18 13:09:38.740324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.205 [2024-11-18 13:09:38.740333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.205 [2024-11-18 13:09:38.751936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.205 [2024-11-18 13:09:38.752271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.205 [2024-11-18 13:09:38.752288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.206 [2024-11-18 13:09:38.752296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.206 [2024-11-18 13:09:38.752486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.206 [2024-11-18 13:09:38.752659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.206 [2024-11-18 13:09:38.752669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.206 [2024-11-18 13:09:38.752676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.206 [2024-11-18 13:09:38.752683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.206 [2024-11-18 13:09:38.764733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.206 [2024-11-18 13:09:38.765166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.206 [2024-11-18 13:09:38.765211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.206 [2024-11-18 13:09:38.765234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.206 [2024-11-18 13:09:38.765709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.206 [2024-11-18 13:09:38.765884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.206 [2024-11-18 13:09:38.765894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.206 [2024-11-18 13:09:38.765900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.206 [2024-11-18 13:09:38.765907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.206 [2024-11-18 13:09:38.777645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.206 [2024-11-18 13:09:38.777970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.206 [2024-11-18 13:09:38.777987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.206 [2024-11-18 13:09:38.777995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.206 [2024-11-18 13:09:38.778159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.206 [2024-11-18 13:09:38.778324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.206 [2024-11-18 13:09:38.778333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.206 [2024-11-18 13:09:38.778339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.206 [2024-11-18 13:09:38.778345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.206 [2024-11-18 13:09:38.790446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.206 [2024-11-18 13:09:38.790869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.206 [2024-11-18 13:09:38.790885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.206 [2024-11-18 13:09:38.790893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.206 [2024-11-18 13:09:38.791055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.206 [2024-11-18 13:09:38.791218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.206 [2024-11-18 13:09:38.791227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.206 [2024-11-18 13:09:38.791234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.206 [2024-11-18 13:09:38.791240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.206 [2024-11-18 13:09:38.803372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.206 [2024-11-18 13:09:38.803790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.206 [2024-11-18 13:09:38.803807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.206 [2024-11-18 13:09:38.803815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.206 [2024-11-18 13:09:38.803978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.206 [2024-11-18 13:09:38.804142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.206 [2024-11-18 13:09:38.804151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.206 [2024-11-18 13:09:38.804157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.206 [2024-11-18 13:09:38.804163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.206 [2024-11-18 13:09:38.816203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.206 [2024-11-18 13:09:38.816624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.206 [2024-11-18 13:09:38.816641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.206 [2024-11-18 13:09:38.816649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.206 [2024-11-18 13:09:38.816814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.206 [2024-11-18 13:09:38.816978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.206 [2024-11-18 13:09:38.816987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.206 [2024-11-18 13:09:38.816994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.206 [2024-11-18 13:09:38.817000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.206 [2024-11-18 13:09:38.829122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.206 [2024-11-18 13:09:38.829544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.206 [2024-11-18 13:09:38.829561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.206 [2024-11-18 13:09:38.829569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.206 [2024-11-18 13:09:38.829736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.206 [2024-11-18 13:09:38.829900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.206 [2024-11-18 13:09:38.829909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.206 [2024-11-18 13:09:38.829915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.206 [2024-11-18 13:09:38.829922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.206 [2024-11-18 13:09:38.841947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.206 [2024-11-18 13:09:38.842350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.206 [2024-11-18 13:09:38.842406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.206 [2024-11-18 13:09:38.842430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.206 [2024-11-18 13:09:38.842910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.206 [2024-11-18 13:09:38.843074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.206 [2024-11-18 13:09:38.843083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.206 [2024-11-18 13:09:38.843090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.206 [2024-11-18 13:09:38.843097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.206 [2024-11-18 13:09:38.854834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.206 [2024-11-18 13:09:38.855249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.206 [2024-11-18 13:09:38.855265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.206 [2024-11-18 13:09:38.855273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.206 [2024-11-18 13:09:38.855460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.206 [2024-11-18 13:09:38.855633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.206 [2024-11-18 13:09:38.855643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.206 [2024-11-18 13:09:38.855650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.206 [2024-11-18 13:09:38.855657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.206 [2024-11-18 13:09:38.867687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.206 [2024-11-18 13:09:38.868107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.206 [2024-11-18 13:09:38.868123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.206 [2024-11-18 13:09:38.868131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.206 [2024-11-18 13:09:38.868294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.206 [2024-11-18 13:09:38.868481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.206 [2024-11-18 13:09:38.868495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.206 [2024-11-18 13:09:38.868501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.207 [2024-11-18 13:09:38.868508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.207 [2024-11-18 13:09:38.880605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.207 [2024-11-18 13:09:38.881028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.207 [2024-11-18 13:09:38.881062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.207 [2024-11-18 13:09:38.881086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.207 [2024-11-18 13:09:38.881689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.207 [2024-11-18 13:09:38.881865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.207 [2024-11-18 13:09:38.881875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.207 [2024-11-18 13:09:38.881881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.207 [2024-11-18 13:09:38.881888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.207 7000.00 IOPS, 27.34 MiB/s [2024-11-18T12:09:38.909Z] [2024-11-18 13:09:38.893432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.207 [2024-11-18 13:09:38.893855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.207 [2024-11-18 13:09:38.893873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.207 [2024-11-18 13:09:38.893880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.207 [2024-11-18 13:09:38.894044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.207 [2024-11-18 13:09:38.894207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.207 [2024-11-18 13:09:38.894217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.207 [2024-11-18 13:09:38.894223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.207 [2024-11-18 13:09:38.894229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.467 [2024-11-18 13:09:38.906278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.467 [2024-11-18 13:09:38.906702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.467 [2024-11-18 13:09:38.906746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.467 [2024-11-18 13:09:38.906770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.467 [2024-11-18 13:09:38.907350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.467 [2024-11-18 13:09:38.907948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.467 [2024-11-18 13:09:38.907966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.467 [2024-11-18 13:09:38.907982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.467 [2024-11-18 13:09:38.908001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.467 [2024-11-18 13:09:38.921232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.467 [2024-11-18 13:09:38.921742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.467 [2024-11-18 13:09:38.921787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.467 [2024-11-18 13:09:38.921811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.467 [2024-11-18 13:09:38.922403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.467 [2024-11-18 13:09:38.922976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.467 [2024-11-18 13:09:38.922988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.467 [2024-11-18 13:09:38.922998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.467 [2024-11-18 13:09:38.923008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.467 [2024-11-18 13:09:38.934152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.467 [2024-11-18 13:09:38.934497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.467 [2024-11-18 13:09:38.934515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.467 [2024-11-18 13:09:38.934523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.467 [2024-11-18 13:09:38.934691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.467 [2024-11-18 13:09:38.934860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.467 [2024-11-18 13:09:38.934869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.467 [2024-11-18 13:09:38.934876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.467 [2024-11-18 13:09:38.934883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.467 [2024-11-18 13:09:38.946969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.467 [2024-11-18 13:09:38.947388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.467 [2024-11-18 13:09:38.947405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.467 [2024-11-18 13:09:38.947412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.467 [2024-11-18 13:09:38.947576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.467 [2024-11-18 13:09:38.947739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.467 [2024-11-18 13:09:38.947748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.467 [2024-11-18 13:09:38.947755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.467 [2024-11-18 13:09:38.947761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.467 [2024-11-18 13:09:38.959858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.467 [2024-11-18 13:09:38.960320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.467 [2024-11-18 13:09:38.960376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.467 [2024-11-18 13:09:38.960402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.467 [2024-11-18 13:09:38.960901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.467 [2024-11-18 13:09:38.961075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.467 [2024-11-18 13:09:38.961085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.467 [2024-11-18 13:09:38.961092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.467 [2024-11-18 13:09:38.961099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.467 [2024-11-18 13:09:38.973005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.467 [2024-11-18 13:09:38.973437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.467 [2024-11-18 13:09:38.973456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.467 [2024-11-18 13:09:38.973464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.467 [2024-11-18 13:09:38.973652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.467 [2024-11-18 13:09:38.973826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.467 [2024-11-18 13:09:38.973836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.467 [2024-11-18 13:09:38.973842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:38.973849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:38.986041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:38.986398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:38.986416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:38.986424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:38.986587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:38.986751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:38.986759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.468 [2024-11-18 13:09:38.986766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:38.986772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:38.999053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:38.999483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:38.999528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:38.999559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:39.000027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:39.000192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:39.000202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.468 [2024-11-18 13:09:39.000208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:39.000214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:39.011882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:39.012302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:39.012345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:39.012383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:39.012907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:39.013072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:39.013081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.468 [2024-11-18 13:09:39.013088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:39.013094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:39.024853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:39.025276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:39.025320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:39.025344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:39.025911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:39.026077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:39.026086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.468 [2024-11-18 13:09:39.026093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:39.026101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:39.037776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:39.038123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:39.038139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:39.038146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:39.038310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:39.038500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:39.038513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.468 [2024-11-18 13:09:39.038520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:39.038527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:39.050589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:39.050986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:39.051003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:39.051011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:39.051175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:39.051338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:39.051348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.468 [2024-11-18 13:09:39.051359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:39.051365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:39.063440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:39.063699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:39.063715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:39.063723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:39.063886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:39.064050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:39.064059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.468 [2024-11-18 13:09:39.064065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:39.064071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:39.076261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:39.076699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:39.076745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:39.076768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:39.077305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:39.077497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:39.077507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.468 [2024-11-18 13:09:39.077514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:39.077524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:39.089156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:39.089569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:39.089586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:39.089594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:39.089757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:39.089921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:39.089930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.468 [2024-11-18 13:09:39.089936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.468 [2024-11-18 13:09:39.089943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.468 [2024-11-18 13:09:39.102088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.468 [2024-11-18 13:09:39.102512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.468 [2024-11-18 13:09:39.102557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.468 [2024-11-18 13:09:39.102581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.468 [2024-11-18 13:09:39.103161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.468 [2024-11-18 13:09:39.103345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.468 [2024-11-18 13:09:39.103359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.469 [2024-11-18 13:09:39.103367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.469 [2024-11-18 13:09:39.103373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.469 [2024-11-18 13:09:39.115031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.469 [2024-11-18 13:09:39.115449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.469 [2024-11-18 13:09:39.115495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.469 [2024-11-18 13:09:39.115519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.469 [2024-11-18 13:09:39.116104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.469 [2024-11-18 13:09:39.116269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.469 [2024-11-18 13:09:39.116278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.469 [2024-11-18 13:09:39.116286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.469 [2024-11-18 13:09:39.116294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.469 [2024-11-18 13:09:39.127961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.469 [2024-11-18 13:09:39.128385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-11-18 13:09:39.128402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.469 [2024-11-18 13:09:39.128409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.469 [2024-11-18 13:09:39.128573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.469 [2024-11-18 13:09:39.128737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.469 [2024-11-18 13:09:39.128746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.469 [2024-11-18 13:09:39.128753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.469 [2024-11-18 13:09:39.128759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.469 [2024-11-18 13:09:39.140792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.469 [2024-11-18 13:09:39.141192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-11-18 13:09:39.141237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.469 [2024-11-18 13:09:39.141260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.469 [2024-11-18 13:09:39.141856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.469 [2024-11-18 13:09:39.142421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.469 [2024-11-18 13:09:39.142431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.469 [2024-11-18 13:09:39.142438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.469 [2024-11-18 13:09:39.142445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.469 [2024-11-18 13:09:39.153684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.469 [2024-11-18 13:09:39.154027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-11-18 13:09:39.154043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.469 [2024-11-18 13:09:39.154052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.469 [2024-11-18 13:09:39.154215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.469 [2024-11-18 13:09:39.154385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.469 [2024-11-18 13:09:39.154395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.469 [2024-11-18 13:09:39.154418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.469 [2024-11-18 13:09:39.154426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.728 [2024-11-18 13:09:39.166700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.728 [2024-11-18 13:09:39.167091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.728 [2024-11-18 13:09:39.167108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.728 [2024-11-18 13:09:39.167118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.728 [2024-11-18 13:09:39.167282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.728 [2024-11-18 13:09:39.167471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.728 [2024-11-18 13:09:39.167481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.728 [2024-11-18 13:09:39.167488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.728 [2024-11-18 13:09:39.167495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.728 [2024-11-18 13:09:39.179522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.179920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.179937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.179944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.180108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.180272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.180282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.180288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.729 [2024-11-18 13:09:39.180294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.729 [2024-11-18 13:09:39.192356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.192759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.192776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.192784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.192947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.193111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.193121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.193127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.729 [2024-11-18 13:09:39.193133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.729 [2024-11-18 13:09:39.205233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.205637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.205683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.205707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.206286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.206494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.206507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.206514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.729 [2024-11-18 13:09:39.206522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.729 [2024-11-18 13:09:39.218052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.218413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.218431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.218439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.218604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.218768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.218778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.218785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.729 [2024-11-18 13:09:39.218791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.729 [2024-11-18 13:09:39.231228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.231610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.231655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.231680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.232260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.232736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.232746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.232753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.729 [2024-11-18 13:09:39.232760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.729 [2024-11-18 13:09:39.244071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.244382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.244399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.244407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.244570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.244734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.244743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.244749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.729 [2024-11-18 13:09:39.244758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.729 [2024-11-18 13:09:39.256898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.257304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.257349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.257389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.257815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.257989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.257999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.258006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.729 [2024-11-18 13:09:39.258012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.729 [2024-11-18 13:09:39.269752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.270151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.270167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.270175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.270339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.270530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.270540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.270546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.729 [2024-11-18 13:09:39.270553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.729 [2024-11-18 13:09:39.282553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.282893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.282910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.282917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.283081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.283245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.283254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.283260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.729 [2024-11-18 13:09:39.283267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.729 [2024-11-18 13:09:39.295455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.729 [2024-11-18 13:09:39.295858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.729 [2024-11-18 13:09:39.295874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.729 [2024-11-18 13:09:39.295882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.729 [2024-11-18 13:09:39.296045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.729 [2024-11-18 13:09:39.296209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.729 [2024-11-18 13:09:39.296218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.729 [2024-11-18 13:09:39.296225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.730 [2024-11-18 13:09:39.296231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.730 [2024-11-18 13:09:39.308359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.730 [2024-11-18 13:09:39.308777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.730 [2024-11-18 13:09:39.308793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.730 [2024-11-18 13:09:39.308802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.730 [2024-11-18 13:09:39.308965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.730 [2024-11-18 13:09:39.309128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.730 [2024-11-18 13:09:39.309138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.730 [2024-11-18 13:09:39.309144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.730 [2024-11-18 13:09:39.309150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.730 [2024-11-18 13:09:39.321174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.730 [2024-11-18 13:09:39.321600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.730 [2024-11-18 13:09:39.321645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.730 [2024-11-18 13:09:39.321668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.730 [2024-11-18 13:09:39.322246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.730 [2024-11-18 13:09:39.322842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.730 [2024-11-18 13:09:39.322852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.730 [2024-11-18 13:09:39.322858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.730 [2024-11-18 13:09:39.322865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.730 [2024-11-18 13:09:39.336286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.730 [2024-11-18 13:09:39.336764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.730 [2024-11-18 13:09:39.336787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.730 [2024-11-18 13:09:39.336802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.730 [2024-11-18 13:09:39.337056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.730 [2024-11-18 13:09:39.337312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.730 [2024-11-18 13:09:39.337326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.730 [2024-11-18 13:09:39.337336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.730 [2024-11-18 13:09:39.337346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.730 [2024-11-18 13:09:39.349479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.730 [2024-11-18 13:09:39.349835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.730 [2024-11-18 13:09:39.349853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.730 [2024-11-18 13:09:39.349861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.730 [2024-11-18 13:09:39.350040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.730 [2024-11-18 13:09:39.350219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.730 [2024-11-18 13:09:39.350229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.730 [2024-11-18 13:09:39.350236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.730 [2024-11-18 13:09:39.350242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.730 [2024-11-18 13:09:39.362478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.730 [2024-11-18 13:09:39.362810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.730 [2024-11-18 13:09:39.362827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.730 [2024-11-18 13:09:39.362835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.730 [2024-11-18 13:09:39.363009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.730 [2024-11-18 13:09:39.363185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.730 [2024-11-18 13:09:39.363194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.730 [2024-11-18 13:09:39.363201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.730 [2024-11-18 13:09:39.363207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.730 [2024-11-18 13:09:39.375571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.730 [2024-11-18 13:09:39.375968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.730 [2024-11-18 13:09:39.375985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.730 [2024-11-18 13:09:39.375994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.730 [2024-11-18 13:09:39.376167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.730 [2024-11-18 13:09:39.376365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.730 [2024-11-18 13:09:39.376378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.730 [2024-11-18 13:09:39.376386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.730 [2024-11-18 13:09:39.376393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.730 [2024-11-18 13:09:39.388759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.730 [2024-11-18 13:09:39.389188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.730 [2024-11-18 13:09:39.389206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.730 [2024-11-18 13:09:39.389214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.730 [2024-11-18 13:09:39.389398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.730 [2024-11-18 13:09:39.389578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.730 [2024-11-18 13:09:39.389587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.730 [2024-11-18 13:09:39.389595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.730 [2024-11-18 13:09:39.389602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.730 [2024-11-18 13:09:39.401814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.730 [2024-11-18 13:09:39.402246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.730 [2024-11-18 13:09:39.402265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:41.730 [2024-11-18 13:09:39.402273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:41.730 [2024-11-18 13:09:39.402457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:41.730 [2024-11-18 13:09:39.402643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.730 [2024-11-18 13:09:39.402653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.730 [2024-11-18 13:09:39.402660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.730 [2024-11-18 13:09:39.402666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.730 [2024-11-18 13:09:39.414903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.730 [2024-11-18 13:09:39.415266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.730 [2024-11-18 13:09:39.415284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.730 [2024-11-18 13:09:39.415292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.730 [2024-11-18 13:09:39.415488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.730 [2024-11-18 13:09:39.415668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.730 [2024-11-18 13:09:39.415678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.730 [2024-11-18 13:09:39.415685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.730 [2024-11-18 13:09:39.415696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.990 [2024-11-18 13:09:39.427898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.990 [2024-11-18 13:09:39.428163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.990 [2024-11-18 13:09:39.428180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.990 [2024-11-18 13:09:39.428188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.990 [2024-11-18 13:09:39.428366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.990 [2024-11-18 13:09:39.428542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.990 [2024-11-18 13:09:39.428552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.990 [2024-11-18 13:09:39.428559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.990 [2024-11-18 13:09:39.428565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.990 [2024-11-18 13:09:39.441054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.990 [2024-11-18 13:09:39.441484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.990 [2024-11-18 13:09:39.441527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.990 [2024-11-18 13:09:39.441554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.990 [2024-11-18 13:09:39.442099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.990 [2024-11-18 13:09:39.442278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.990 [2024-11-18 13:09:39.442287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.990 [2024-11-18 13:09:39.442294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.990 [2024-11-18 13:09:39.442301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.990 [2024-11-18 13:09:39.453985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.990 [2024-11-18 13:09:39.454344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.990 [2024-11-18 13:09:39.454406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.990 [2024-11-18 13:09:39.454430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.990 [2024-11-18 13:09:39.454990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.990 [2024-11-18 13:09:39.455389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.990 [2024-11-18 13:09:39.455409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.990 [2024-11-18 13:09:39.455423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.990 [2024-11-18 13:09:39.455438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.990 [2024-11-18 13:09:39.468721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.990 [2024-11-18 13:09:39.469151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.990 [2024-11-18 13:09:39.469173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.990 [2024-11-18 13:09:39.469184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.990 [2024-11-18 13:09:39.469446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.469703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.469715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.469725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.469735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.991 [2024-11-18 13:09:39.481897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.991 [2024-11-18 13:09:39.482219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.991 [2024-11-18 13:09:39.482237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.991 [2024-11-18 13:09:39.482245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.991 [2024-11-18 13:09:39.482429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.482615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.482624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.482633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.482640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.991 [2024-11-18 13:09:39.494839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.991 [2024-11-18 13:09:39.495190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.991 [2024-11-18 13:09:39.495207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.991 [2024-11-18 13:09:39.495214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.991 [2024-11-18 13:09:39.495382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.495546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.495555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.495562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.495568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.991 [2024-11-18 13:09:39.507654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.991 [2024-11-18 13:09:39.508041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.991 [2024-11-18 13:09:39.508058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.991 [2024-11-18 13:09:39.508065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.991 [2024-11-18 13:09:39.508232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.508402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.508412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.508418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.508425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.991 [2024-11-18 13:09:39.520515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.991 [2024-11-18 13:09:39.520885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.991 [2024-11-18 13:09:39.520902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.991 [2024-11-18 13:09:39.520910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.991 [2024-11-18 13:09:39.521073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.521236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.521246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.521252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.521258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.991 [2024-11-18 13:09:39.533438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.991 [2024-11-18 13:09:39.533790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.991 [2024-11-18 13:09:39.533834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.991 [2024-11-18 13:09:39.533858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.991 [2024-11-18 13:09:39.534363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.534528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.534538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.534544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.534550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.991 [2024-11-18 13:09:39.546322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.991 [2024-11-18 13:09:39.546672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.991 [2024-11-18 13:09:39.546689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.991 [2024-11-18 13:09:39.546696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.991 [2024-11-18 13:09:39.546860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.547024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.547036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.547043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.547049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.991 [2024-11-18 13:09:39.559235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.991 [2024-11-18 13:09:39.559513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.991 [2024-11-18 13:09:39.559530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.991 [2024-11-18 13:09:39.559538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.991 [2024-11-18 13:09:39.559702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.559866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.559876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.559882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.559889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.991 [2024-11-18 13:09:39.572122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.991 [2024-11-18 13:09:39.572415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.991 [2024-11-18 13:09:39.572433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.991 [2024-11-18 13:09:39.572440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.991 [2024-11-18 13:09:39.572603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.572767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.572776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.572783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.572789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.991 [2024-11-18 13:09:39.585224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.991 [2024-11-18 13:09:39.585551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.991 [2024-11-18 13:09:39.585569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.991 [2024-11-18 13:09:39.585577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.991 [2024-11-18 13:09:39.585755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.991 [2024-11-18 13:09:39.585933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.991 [2024-11-18 13:09:39.585943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.991 [2024-11-18 13:09:39.585950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.991 [2024-11-18 13:09:39.585961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.992 [2024-11-18 13:09:39.598325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.992 [2024-11-18 13:09:39.598740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.992 [2024-11-18 13:09:39.598758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.992 [2024-11-18 13:09:39.598766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.992 [2024-11-18 13:09:39.598944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.992 [2024-11-18 13:09:39.599123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.992 [2024-11-18 13:09:39.599133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.992 [2024-11-18 13:09:39.599140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.992 [2024-11-18 13:09:39.599146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.992 [2024-11-18 13:09:39.611518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.992 [2024-11-18 13:09:39.611923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.992 [2024-11-18 13:09:39.611941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.992 [2024-11-18 13:09:39.611949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.992 [2024-11-18 13:09:39.612127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.992 [2024-11-18 13:09:39.612306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.992 [2024-11-18 13:09:39.612316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.992 [2024-11-18 13:09:39.612323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.992 [2024-11-18 13:09:39.612329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.992 [2024-11-18 13:09:39.624708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.992 [2024-11-18 13:09:39.625063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.992 [2024-11-18 13:09:39.625082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.992 [2024-11-18 13:09:39.625090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.992 [2024-11-18 13:09:39.625268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.992 [2024-11-18 13:09:39.625452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.992 [2024-11-18 13:09:39.625462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.992 [2024-11-18 13:09:39.625469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.992 [2024-11-18 13:09:39.625476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.992 [2024-11-18 13:09:39.637987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.992 [2024-11-18 13:09:39.638428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.992 [2024-11-18 13:09:39.638447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.992 [2024-11-18 13:09:39.638455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.992 [2024-11-18 13:09:39.638633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.992 [2024-11-18 13:09:39.638813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.992 [2024-11-18 13:09:39.638823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.992 [2024-11-18 13:09:39.638830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.992 [2024-11-18 13:09:39.638837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.992 [2024-11-18 13:09:39.651197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.992 [2024-11-18 13:09:39.651637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.992 [2024-11-18 13:09:39.651655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.992 [2024-11-18 13:09:39.651663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.992 [2024-11-18 13:09:39.651841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.992 [2024-11-18 13:09:39.652020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.992 [2024-11-18 13:09:39.652030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.992 [2024-11-18 13:09:39.652037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.992 [2024-11-18 13:09:39.652044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.992 [2024-11-18 13:09:39.664234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.992 [2024-11-18 13:09:39.664668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.992 [2024-11-18 13:09:39.664686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.992 [2024-11-18 13:09:39.664694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.992 [2024-11-18 13:09:39.664872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.992 [2024-11-18 13:09:39.665051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.992 [2024-11-18 13:09:39.665060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.992 [2024-11-18 13:09:39.665067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.992 [2024-11-18 13:09:39.665075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.992 [2024-11-18 13:09:39.677434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.992 [2024-11-18 13:09:39.677867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.992 [2024-11-18 13:09:39.677885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:41.992 [2024-11-18 13:09:39.677893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:41.992 [2024-11-18 13:09:39.678076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:41.992 [2024-11-18 13:09:39.678256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.992 [2024-11-18 13:09:39.678266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.992 [2024-11-18 13:09:39.678272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.992 [2024-11-18 13:09:39.678279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.253 [2024-11-18 13:09:39.690476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.253 [2024-11-18 13:09:39.690906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.253 [2024-11-18 13:09:39.690924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.253 [2024-11-18 13:09:39.690932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.253 [2024-11-18 13:09:39.691111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.253 [2024-11-18 13:09:39.691293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.253 [2024-11-18 13:09:39.691303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.253 [2024-11-18 13:09:39.691310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.253 [2024-11-18 13:09:39.691317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.253 [2024-11-18 13:09:39.703533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.253 [2024-11-18 13:09:39.703891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.253 [2024-11-18 13:09:39.703908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.253 [2024-11-18 13:09:39.703917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.253 [2024-11-18 13:09:39.704094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.253 [2024-11-18 13:09:39.704272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.253 [2024-11-18 13:09:39.704283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.253 [2024-11-18 13:09:39.704290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.253 [2024-11-18 13:09:39.704296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.253 [2024-11-18 13:09:39.716667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.253 [2024-11-18 13:09:39.717028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.253 [2024-11-18 13:09:39.717046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.253 [2024-11-18 13:09:39.717054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.253 [2024-11-18 13:09:39.717232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.253 [2024-11-18 13:09:39.717417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.253 [2024-11-18 13:09:39.717431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.253 [2024-11-18 13:09:39.717438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.253 [2024-11-18 13:09:39.717446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.253 [2024-11-18 13:09:39.729813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.253 [2024-11-18 13:09:39.730244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.253 [2024-11-18 13:09:39.730261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.253 [2024-11-18 13:09:39.730269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.253 [2024-11-18 13:09:39.730452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.253 [2024-11-18 13:09:39.730631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.253 [2024-11-18 13:09:39.730640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.253 [2024-11-18 13:09:39.730648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.253 [2024-11-18 13:09:39.730654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.253 [2024-11-18 13:09:39.742859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.253 [2024-11-18 13:09:39.743289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.253 [2024-11-18 13:09:39.743307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.253 [2024-11-18 13:09:39.743315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.253 [2024-11-18 13:09:39.743500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.253 [2024-11-18 13:09:39.743680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.253 [2024-11-18 13:09:39.743690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.253 [2024-11-18 13:09:39.743697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.253 [2024-11-18 13:09:39.743703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.253 [2024-11-18 13:09:39.755941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.253 [2024-11-18 13:09:39.756373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.253 [2024-11-18 13:09:39.756392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.253 [2024-11-18 13:09:39.756401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.253 [2024-11-18 13:09:39.756574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.253 [2024-11-18 13:09:39.756747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.254 [2024-11-18 13:09:39.756757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.254 [2024-11-18 13:09:39.756764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.254 [2024-11-18 13:09:39.756778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.254 [2024-11-18 13:09:39.768888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.254 [2024-11-18 13:09:39.769311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.254 [2024-11-18 13:09:39.769327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.254 [2024-11-18 13:09:39.769335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.254 [2024-11-18 13:09:39.769506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.254 [2024-11-18 13:09:39.769671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.254 [2024-11-18 13:09:39.769680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.254 [2024-11-18 13:09:39.769687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.254 [2024-11-18 13:09:39.769693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.254 [2024-11-18 13:09:39.781699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.254 [2024-11-18 13:09:39.782033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.254 [2024-11-18 13:09:39.782079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.254 [2024-11-18 13:09:39.782102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.254 [2024-11-18 13:09:39.782699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.254 [2024-11-18 13:09:39.782919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.254 [2024-11-18 13:09:39.782929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.254 [2024-11-18 13:09:39.782936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.254 [2024-11-18 13:09:39.782944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.254 [2024-11-18 13:09:39.794659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.254 [2024-11-18 13:09:39.795053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.254 [2024-11-18 13:09:39.795098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.254 [2024-11-18 13:09:39.795123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.254 [2024-11-18 13:09:39.795719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.254 [2024-11-18 13:09:39.796108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.254 [2024-11-18 13:09:39.796117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.254 [2024-11-18 13:09:39.796123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.254 [2024-11-18 13:09:39.796130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.254 [2024-11-18 13:09:39.807481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.254 [2024-11-18 13:09:39.807749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.254 [2024-11-18 13:09:39.807766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.254 [2024-11-18 13:09:39.807774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.254 [2024-11-18 13:09:39.807938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.254 [2024-11-18 13:09:39.808101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.254 [2024-11-18 13:09:39.808110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.254 [2024-11-18 13:09:39.808117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.254 [2024-11-18 13:09:39.808124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.254 [2024-11-18 13:09:39.820393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.254 [2024-11-18 13:09:39.820752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.254 [2024-11-18 13:09:39.820768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.254 [2024-11-18 13:09:39.820776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.254 [2024-11-18 13:09:39.820939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.254 [2024-11-18 13:09:39.821102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.254 [2024-11-18 13:09:39.821111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.254 [2024-11-18 13:09:39.821118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.254 [2024-11-18 13:09:39.821125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.254 [2024-11-18 13:09:39.833391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.254 [2024-11-18 13:09:39.833809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.254 [2024-11-18 13:09:39.833853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.254 [2024-11-18 13:09:39.833877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.254 [2024-11-18 13:09:39.834281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.254 [2024-11-18 13:09:39.834453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.254 [2024-11-18 13:09:39.834463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.254 [2024-11-18 13:09:39.834470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.254 [2024-11-18 13:09:39.834476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.254 [2024-11-18 13:09:39.846263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.254 [2024-11-18 13:09:39.846686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.254 [2024-11-18 13:09:39.846730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.254 [2024-11-18 13:09:39.846754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.254 [2024-11-18 13:09:39.847246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.254 [2024-11-18 13:09:39.847418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.254 [2024-11-18 13:09:39.847429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.254 [2024-11-18 13:09:39.847436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.254 [2024-11-18 13:09:39.847443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.254 [2024-11-18 13:09:39.859088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.254 [2024-11-18 13:09:39.859493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.254 [2024-11-18 13:09:39.859512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.254 [2024-11-18 13:09:39.859520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.254 [2024-11-18 13:09:39.859683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.254 [2024-11-18 13:09:39.859847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.254 [2024-11-18 13:09:39.859856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.254 [2024-11-18 13:09:39.859863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.254 [2024-11-18 13:09:39.859870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.254 [2024-11-18 13:09:39.871967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.254 [2024-11-18 13:09:39.872374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.254 [2024-11-18 13:09:39.872420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.254 [2024-11-18 13:09:39.872444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.254 [2024-11-18 13:09:39.872958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.254 [2024-11-18 13:09:39.873123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.254 [2024-11-18 13:09:39.873131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.254 [2024-11-18 13:09:39.873137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.254 [2024-11-18 13:09:39.873142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.254 [2024-11-18 13:09:39.884873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.254 [2024-11-18 13:09:39.885293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.254 [2024-11-18 13:09:39.885338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.254 [2024-11-18 13:09:39.885378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.254 [2024-11-18 13:09:39.885959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.254 [2024-11-18 13:09:39.886148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.254 [2024-11-18 13:09:39.886159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.255 [2024-11-18 13:09:39.886166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.255 [2024-11-18 13:09:39.886172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.255 5600.00 IOPS, 21.88 MiB/s [2024-11-18T12:09:39.957Z] [2024-11-18 13:09:39.897774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.255 [2024-11-18 13:09:39.898114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.255 [2024-11-18 13:09:39.898131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.255 [2024-11-18 13:09:39.898139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.255 [2024-11-18 13:09:39.898302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.255 [2024-11-18 13:09:39.898474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.255 [2024-11-18 13:09:39.898485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.255 [2024-11-18 13:09:39.898491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.255 [2024-11-18 13:09:39.898498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.255 [2024-11-18 13:09:39.910695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.255 [2024-11-18 13:09:39.911119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.255 [2024-11-18 13:09:39.911167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.255 [2024-11-18 13:09:39.911192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.255 [2024-11-18 13:09:39.911787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.255 [2024-11-18 13:09:39.912280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.255 [2024-11-18 13:09:39.912289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.255 [2024-11-18 13:09:39.912295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.255 [2024-11-18 13:09:39.912302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.255 [2024-11-18 13:09:39.923624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.255 [2024-11-18 13:09:39.924046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.255 [2024-11-18 13:09:39.924063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.255 [2024-11-18 13:09:39.924071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.255 [2024-11-18 13:09:39.924235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.255 [2024-11-18 13:09:39.924406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.255 [2024-11-18 13:09:39.924416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.255 [2024-11-18 13:09:39.924423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.255 [2024-11-18 13:09:39.924433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.255 [2024-11-18 13:09:39.936522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.255 [2024-11-18 13:09:39.936937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.255 [2024-11-18 13:09:39.936955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.255 [2024-11-18 13:09:39.936962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.255 [2024-11-18 13:09:39.937124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.255 [2024-11-18 13:09:39.937288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.255 [2024-11-18 13:09:39.937297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.255 [2024-11-18 13:09:39.937304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.255 [2024-11-18 13:09:39.937311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.255 [2024-11-18 13:09:39.949569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.516 [2024-11-18 13:09:39.950045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.516 [2024-11-18 13:09:39.950090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.516 [2024-11-18 13:09:39.950114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.516 [2024-11-18 13:09:39.950712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.516 [2024-11-18 13:09:39.951148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.516 [2024-11-18 13:09:39.951166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.516 [2024-11-18 13:09:39.951181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.516 [2024-11-18 13:09:39.951195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.516 [2024-11-18 13:09:39.964497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.516 [2024-11-18 13:09:39.964994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.516 [2024-11-18 13:09:39.965016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.516 [2024-11-18 13:09:39.965026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.516 [2024-11-18 13:09:39.965280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.516 [2024-11-18 13:09:39.965543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.516 [2024-11-18 13:09:39.965557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.516 [2024-11-18 13:09:39.965567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.516 [2024-11-18 13:09:39.965576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.516 [2024-11-18 13:09:39.977485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.516 [2024-11-18 13:09:39.977903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.516 [2024-11-18 13:09:39.977947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.516 [2024-11-18 13:09:39.977971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.516 [2024-11-18 13:09:39.978474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.516 [2024-11-18 13:09:39.978648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.516 [2024-11-18 13:09:39.978656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.516 [2024-11-18 13:09:39.978663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.516 [2024-11-18 13:09:39.978669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.516 [2024-11-18 13:09:39.990268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.516 [2024-11-18 13:09:39.990683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.516 [2024-11-18 13:09:39.990701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.516 [2024-11-18 13:09:39.990709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.516 [2024-11-18 13:09:39.990882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.516 [2024-11-18 13:09:39.991055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.516 [2024-11-18 13:09:39.991065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.516 [2024-11-18 13:09:39.991072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.516 [2024-11-18 13:09:39.991078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.516 [2024-11-18 13:09:40.003368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.516 [2024-11-18 13:09:40.003782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.516 [2024-11-18 13:09:40.003800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.516 [2024-11-18 13:09:40.003808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.516 [2024-11-18 13:09:40.003988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.516 [2024-11-18 13:09:40.004167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.516 [2024-11-18 13:09:40.004178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.516 [2024-11-18 13:09:40.004185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.516 [2024-11-18 13:09:40.004192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.516 [2024-11-18 13:09:40.016510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.516 [2024-11-18 13:09:40.016875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.516 [2024-11-18 13:09:40.016893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.516 [2024-11-18 13:09:40.016904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.516 [2024-11-18 13:09:40.017083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.516 [2024-11-18 13:09:40.017262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.516 [2024-11-18 13:09:40.017272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.516 [2024-11-18 13:09:40.017279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.516 [2024-11-18 13:09:40.017285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.516 [2024-11-18 13:09:40.029606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.516 [2024-11-18 13:09:40.030045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.516 [2024-11-18 13:09:40.030063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.516 [2024-11-18 13:09:40.030072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.516 [2024-11-18 13:09:40.030278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.516 [2024-11-18 13:09:40.030486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.516 [2024-11-18 13:09:40.030499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.516 [2024-11-18 13:09:40.030507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.516 [2024-11-18 13:09:40.030516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.516 [2024-11-18 13:09:40.042672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.516 [2024-11-18 13:09:40.043090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.516 [2024-11-18 13:09:40.043108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.516 [2024-11-18 13:09:40.043117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.516 [2024-11-18 13:09:40.043295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.516 [2024-11-18 13:09:40.043480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.516 [2024-11-18 13:09:40.043490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.516 [2024-11-18 13:09:40.043498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.516 [2024-11-18 13:09:40.043505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.516 [2024-11-18 13:09:40.055722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.516 [2024-11-18 13:09:40.056147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.516 [2024-11-18 13:09:40.056165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.516 [2024-11-18 13:09:40.056173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.516 [2024-11-18 13:09:40.056358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.517 [2024-11-18 13:09:40.056541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.517 [2024-11-18 13:09:40.056552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.517 [2024-11-18 13:09:40.056559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.517 [2024-11-18 13:09:40.056565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.517 [2024-11-18 13:09:40.069117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.517 [2024-11-18 13:09:40.069725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.517 [2024-11-18 13:09:40.069745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.517 [2024-11-18 13:09:40.069753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.517 [2024-11-18 13:09:40.069950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.517 [2024-11-18 13:09:40.070147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.517 [2024-11-18 13:09:40.070157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.517 [2024-11-18 13:09:40.070165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.517 [2024-11-18 13:09:40.070172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.517 [2024-11-18 13:09:40.082270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.517 [2024-11-18 13:09:40.082708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.517 [2024-11-18 13:09:40.082726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420
00:26:42.517 [2024-11-18 13:09:40.082735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set
00:26:42.517 [2024-11-18 13:09:40.082912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor
00:26:42.517 [2024-11-18 13:09:40.083091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.517 [2024-11-18 13:09:40.083101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.517 [2024-11-18 13:09:40.083108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.517 [2024-11-18 13:09:40.083115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.517 [2024-11-18 13:09:40.095426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.517 [2024-11-18 13:09:40.095871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.517 [2024-11-18 13:09:40.095916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.517 [2024-11-18 13:09:40.095940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.517 [2024-11-18 13:09:40.096547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.517 [2024-11-18 13:09:40.096728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.517 [2024-11-18 13:09:40.096738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.517 [2024-11-18 13:09:40.096745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.517 [2024-11-18 13:09:40.096756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.517 [2024-11-18 13:09:40.108496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.517 [2024-11-18 13:09:40.108927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.517 [2024-11-18 13:09:40.108971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.517 [2024-11-18 13:09:40.108995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.517 [2024-11-18 13:09:40.109590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.517 [2024-11-18 13:09:40.110044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.517 [2024-11-18 13:09:40.110053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.517 [2024-11-18 13:09:40.110060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.517 [2024-11-18 13:09:40.110068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.517 [2024-11-18 13:09:40.121593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.517 [2024-11-18 13:09:40.122028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.517 [2024-11-18 13:09:40.122045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.517 [2024-11-18 13:09:40.122053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.517 [2024-11-18 13:09:40.122231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.517 [2024-11-18 13:09:40.122417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.517 [2024-11-18 13:09:40.122427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.517 [2024-11-18 13:09:40.122435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.517 [2024-11-18 13:09:40.122443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.517 [2024-11-18 13:09:40.134796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.517 [2024-11-18 13:09:40.135249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.517 [2024-11-18 13:09:40.135294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.517 [2024-11-18 13:09:40.135318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.517 [2024-11-18 13:09:40.135854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.517 [2024-11-18 13:09:40.136035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.517 [2024-11-18 13:09:40.136045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.517 [2024-11-18 13:09:40.136052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.517 [2024-11-18 13:09:40.136059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.517 [2024-11-18 13:09:40.147912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.517 [2024-11-18 13:09:40.148329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.517 [2024-11-18 13:09:40.148346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.517 [2024-11-18 13:09:40.148362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.517 [2024-11-18 13:09:40.148540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.517 [2024-11-18 13:09:40.148719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.517 [2024-11-18 13:09:40.148729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.517 [2024-11-18 13:09:40.148735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.517 [2024-11-18 13:09:40.148742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.517 [2024-11-18 13:09:40.161022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.517 [2024-11-18 13:09:40.161454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.517 [2024-11-18 13:09:40.161472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.517 [2024-11-18 13:09:40.161480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.517 [2024-11-18 13:09:40.161665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.517 [2024-11-18 13:09:40.161839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.517 [2024-11-18 13:09:40.161849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.517 [2024-11-18 13:09:40.161856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.517 [2024-11-18 13:09:40.161862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.517 [2024-11-18 13:09:40.174174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.517 [2024-11-18 13:09:40.174616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.517 [2024-11-18 13:09:40.174661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.517 [2024-11-18 13:09:40.174685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.517 [2024-11-18 13:09:40.175127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.517 [2024-11-18 13:09:40.175305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.517 [2024-11-18 13:09:40.175314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.517 [2024-11-18 13:09:40.175321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.517 [2024-11-18 13:09:40.175327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.517 [2024-11-18 13:09:40.187362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.517 [2024-11-18 13:09:40.187805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.517 [2024-11-18 13:09:40.187849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.517 [2024-11-18 13:09:40.187881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.518 [2024-11-18 13:09:40.188477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.518 [2024-11-18 13:09:40.188754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.518 [2024-11-18 13:09:40.188763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.518 [2024-11-18 13:09:40.188770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.518 [2024-11-18 13:09:40.188776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.518 [2024-11-18 13:09:40.200437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.518 [2024-11-18 13:09:40.200798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.518 [2024-11-18 13:09:40.200815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.518 [2024-11-18 13:09:40.200824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.518 [2024-11-18 13:09:40.201001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.518 [2024-11-18 13:09:40.201179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.518 [2024-11-18 13:09:40.201189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.518 [2024-11-18 13:09:40.201196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.518 [2024-11-18 13:09:40.201202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.778 [2024-11-18 13:09:40.213560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.778 [2024-11-18 13:09:40.213997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.778 [2024-11-18 13:09:40.214014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.778 [2024-11-18 13:09:40.214022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.778 [2024-11-18 13:09:40.214201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.778 [2024-11-18 13:09:40.214388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.778 [2024-11-18 13:09:40.214398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.778 [2024-11-18 13:09:40.214407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.778 [2024-11-18 13:09:40.214416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.778 [2024-11-18 13:09:40.226594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.778 [2024-11-18 13:09:40.227002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.778 [2024-11-18 13:09:40.227020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.778 [2024-11-18 13:09:40.227028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.778 [2024-11-18 13:09:40.227206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.778 [2024-11-18 13:09:40.227395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.778 [2024-11-18 13:09:40.227406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.778 [2024-11-18 13:09:40.227413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.778 [2024-11-18 13:09:40.227421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.778 [2024-11-18 13:09:40.239776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.778 [2024-11-18 13:09:40.240212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.778 [2024-11-18 13:09:40.240257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.778 [2024-11-18 13:09:40.240281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.778 [2024-11-18 13:09:40.240876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.778 [2024-11-18 13:09:40.241320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.778 [2024-11-18 13:09:40.241329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.241336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.241343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.252881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.253285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.253302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.253311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.253494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.253674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.779 [2024-11-18 13:09:40.253684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.253692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.253698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.266030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.266499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.266545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.266570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.267080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.267258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.779 [2024-11-18 13:09:40.267268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.267277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.267288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.279132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.279542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.279559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.279567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.280142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.280338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.779 [2024-11-18 13:09:40.280348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.280361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.280368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.292215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.292673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.292719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.292742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.293255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.293441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.779 [2024-11-18 13:09:40.293451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.293458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.293467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.305353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.305790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.305808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.305817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.305995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.306173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.779 [2024-11-18 13:09:40.306183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.306191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.306198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.318564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.318973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.318990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.318998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.319171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.319344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.779 [2024-11-18 13:09:40.319361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.319368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.319375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.331737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.332167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.332184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.332192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.332377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.332555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.779 [2024-11-18 13:09:40.332565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.332572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.332579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.344910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.345372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.345417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.345442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.345989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.346167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.779 [2024-11-18 13:09:40.346176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.346183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.346190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.357958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.358406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.358425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.358437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.358616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.358796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.779 [2024-11-18 13:09:40.358806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.779 [2024-11-18 13:09:40.358813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.779 [2024-11-18 13:09:40.358820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.779 [2024-11-18 13:09:40.371032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.779 [2024-11-18 13:09:40.371466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.779 [2024-11-18 13:09:40.371509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.779 [2024-11-18 13:09:40.371535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.779 [2024-11-18 13:09:40.372116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.779 [2024-11-18 13:09:40.372365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.780 [2024-11-18 13:09:40.372375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.780 [2024-11-18 13:09:40.372382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.780 [2024-11-18 13:09:40.372389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.780 [2024-11-18 13:09:40.384075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.780 [2024-11-18 13:09:40.384438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.780 [2024-11-18 13:09:40.384456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.780 [2024-11-18 13:09:40.384464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.780 [2024-11-18 13:09:40.384643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.780 [2024-11-18 13:09:40.384822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.780 [2024-11-18 13:09:40.384831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.780 [2024-11-18 13:09:40.384838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.780 [2024-11-18 13:09:40.384845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.780 [2024-11-18 13:09:40.397197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.780 [2024-11-18 13:09:40.397645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.780 [2024-11-18 13:09:40.397690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.780 [2024-11-18 13:09:40.397714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.780 [2024-11-18 13:09:40.398272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.780 [2024-11-18 13:09:40.398477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.780 [2024-11-18 13:09:40.398486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.780 [2024-11-18 13:09:40.398494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.780 [2024-11-18 13:09:40.398501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2476593 Killed "${NVMF_APP[@]}" "$@" 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.780 [2024-11-18 13:09:40.410530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2477999 00:26:42.780 [2024-11-18 13:09:40.411017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2477999 00:26:42.780 [2024-11-18 13:09:40.411044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.780 [2024-11-18 13:09:40.411056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2477999 ']' 00:26:42.780 [2024-11-18 13:09:40.411260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.780 [2024-11-18 13:09:40.411469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
00:26:42.780 [2024-11-18 13:09:40.411487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:42.780 [2024-11-18 13:09:40.411499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.780 [2024-11-18 13:09:40.411511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:42.780 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.780 [2024-11-18 13:09:40.423671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.780 [2024-11-18 13:09:40.424143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.780 [2024-11-18 13:09:40.424161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.780 [2024-11-18 13:09:40.424170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.780 [2024-11-18 13:09:40.424359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.780 [2024-11-18 13:09:40.424540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.780 [2024-11-18 13:09:40.424551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.780 [2024-11-18 13:09:40.424558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.780 [2024-11-18 13:09:40.424565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.780 [2024-11-18 13:09:40.436796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.780 [2024-11-18 13:09:40.437151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.780 [2024-11-18 13:09:40.437170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.780 [2024-11-18 13:09:40.437178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.780 [2024-11-18 13:09:40.437366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.780 [2024-11-18 13:09:40.437547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.780 [2024-11-18 13:09:40.437557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.780 [2024-11-18 13:09:40.437564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.780 [2024-11-18 13:09:40.437571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.780 [2024-11-18 13:09:40.449969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.780 [2024-11-18 13:09:40.450411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.780 [2024-11-18 13:09:40.450430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.780 [2024-11-18 13:09:40.450439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.780 [2024-11-18 13:09:40.450619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.780 [2024-11-18 13:09:40.450797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.780 [2024-11-18 13:09:40.450807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.780 [2024-11-18 13:09:40.450814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.780 [2024-11-18 13:09:40.450821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.780 [2024-11-18 13:09:40.461091] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:26:42.780 [2024-11-18 13:09:40.461136] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.780 [2024-11-18 13:09:40.463031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.780 [2024-11-18 13:09:40.463482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.780 [2024-11-18 13:09:40.463501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:42.780 [2024-11-18 13:09:40.463510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:42.780 [2024-11-18 13:09:40.463694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:42.780 [2024-11-18 13:09:40.463875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.780 [2024-11-18 13:09:40.463885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.780 [2024-11-18 13:09:40.463893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.780 [2024-11-18 13:09:40.463901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.041 [2024-11-18 13:09:40.476263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.041 [2024-11-18 13:09:40.476708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-11-18 13:09:40.476727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.041 [2024-11-18 13:09:40.476736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.041 [2024-11-18 13:09:40.476914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.041 [2024-11-18 13:09:40.477092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.041 [2024-11-18 13:09:40.477102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.041 [2024-11-18 13:09:40.477110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.041 [2024-11-18 13:09:40.477117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.041 [2024-11-18 13:09:40.489464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.041 [2024-11-18 13:09:40.489852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-11-18 13:09:40.489871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.041 [2024-11-18 13:09:40.489879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.041 [2024-11-18 13:09:40.490056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.041 [2024-11-18 13:09:40.490233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.041 [2024-11-18 13:09:40.490243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.041 [2024-11-18 13:09:40.490251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.041 [2024-11-18 13:09:40.490258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.041 [2024-11-18 13:09:40.502631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.041 [2024-11-18 13:09:40.503052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-11-18 13:09:40.503069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.041 [2024-11-18 13:09:40.503077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.041 [2024-11-18 13:09:40.503255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.041 [2024-11-18 13:09:40.503441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.041 [2024-11-18 13:09:40.503455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.041 [2024-11-18 13:09:40.503462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.041 [2024-11-18 13:09:40.503470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.041 [2024-11-18 13:09:40.515681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.041 [2024-11-18 13:09:40.516036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-11-18 13:09:40.516053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.041 [2024-11-18 13:09:40.516061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.041 [2024-11-18 13:09:40.516239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.041 [2024-11-18 13:09:40.516423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.041 [2024-11-18 13:09:40.516433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.041 [2024-11-18 13:09:40.516440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.041 [2024-11-18 13:09:40.516448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.041 [2024-11-18 13:09:40.528796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.041 [2024-11-18 13:09:40.529233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-11-18 13:09:40.529251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.041 [2024-11-18 13:09:40.529260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.041 [2024-11-18 13:09:40.529441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.041 [2024-11-18 13:09:40.529620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.041 [2024-11-18 13:09:40.529630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.041 [2024-11-18 13:09:40.529637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.041 [2024-11-18 13:09:40.529643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.041 [2024-11-18 13:09:40.541988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.041 [2024-11-18 13:09:40.542347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-11-18 13:09:40.542372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.041 [2024-11-18 13:09:40.542380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.041 [2024-11-18 13:09:40.542559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.041 [2024-11-18 13:09:40.542557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:43.041 [2024-11-18 13:09:40.542739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.041 [2024-11-18 13:09:40.542750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.041 [2024-11-18 13:09:40.542757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.041 [2024-11-18 13:09:40.542767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.041 [2024-11-18 13:09:40.555136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.041 [2024-11-18 13:09:40.555599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-11-18 13:09:40.555621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.041 [2024-11-18 13:09:40.555630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.041 [2024-11-18 13:09:40.555809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.041 [2024-11-18 13:09:40.555988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.041 [2024-11-18 13:09:40.555998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.041 [2024-11-18 13:09:40.556005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.041 [2024-11-18 13:09:40.556013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.041 [2024-11-18 13:09:40.568193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.041 [2024-11-18 13:09:40.568609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-11-18 13:09:40.568627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.041 [2024-11-18 13:09:40.568636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.041 [2024-11-18 13:09:40.568815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.041 [2024-11-18 13:09:40.568994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.042 [2024-11-18 13:09:40.569004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.042 [2024-11-18 13:09:40.569011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.042 [2024-11-18 13:09:40.569018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.042 [2024-11-18 13:09:40.581379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.042 [2024-11-18 13:09:40.581766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-11-18 13:09:40.581783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.042 [2024-11-18 13:09:40.581792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.042 [2024-11-18 13:09:40.581970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.042 [2024-11-18 13:09:40.582149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.042 [2024-11-18 13:09:40.582158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.042 [2024-11-18 13:09:40.582167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.042 [2024-11-18 13:09:40.582175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.042 [2024-11-18 13:09:40.585610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.042 [2024-11-18 13:09:40.585643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.042 [2024-11-18 13:09:40.585650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.042 [2024-11-18 13:09:40.585656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:43.042 [2024-11-18 13:09:40.585661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:43.042 [2024-11-18 13:09:40.587069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.042 [2024-11-18 13:09:40.587093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.042 [2024-11-18 13:09:40.587094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.042 [2024-11-18 13:09:40.594560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.042 [2024-11-18 13:09:40.595012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-11-18 13:09:40.595033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.042 [2024-11-18 13:09:40.595041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.042 [2024-11-18 13:09:40.595222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.042 [2024-11-18 13:09:40.595407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.042 [2024-11-18 13:09:40.595418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.042 [2024-11-18 13:09:40.595428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.042 [2024-11-18 13:09:40.595437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.042 [2024-11-18 13:09:40.607659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.042 [2024-11-18 13:09:40.608134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-11-18 13:09:40.608154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.042 [2024-11-18 13:09:40.608163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.042 [2024-11-18 13:09:40.608345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.042 [2024-11-18 13:09:40.608532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.042 [2024-11-18 13:09:40.608543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.042 [2024-11-18 13:09:40.608551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.042 [2024-11-18 13:09:40.608559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.042 [2024-11-18 13:09:40.620746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.042 [2024-11-18 13:09:40.621119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-11-18 13:09:40.621141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.042 [2024-11-18 13:09:40.621150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.042 [2024-11-18 13:09:40.621328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.042 [2024-11-18 13:09:40.621514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.042 [2024-11-18 13:09:40.621531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.042 [2024-11-18 13:09:40.621539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.042 [2024-11-18 13:09:40.621547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.042 [2024-11-18 13:09:40.633898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.042 [2024-11-18 13:09:40.634346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-11-18 13:09:40.634372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.042 [2024-11-18 13:09:40.634381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.042 [2024-11-18 13:09:40.634562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.042 [2024-11-18 13:09:40.634742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.042 [2024-11-18 13:09:40.634752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.042 [2024-11-18 13:09:40.634760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.042 [2024-11-18 13:09:40.634767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.042 [2024-11-18 13:09:40.646993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.042 [2024-11-18 13:09:40.647450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-11-18 13:09:40.647472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.042 [2024-11-18 13:09:40.647481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.042 [2024-11-18 13:09:40.647664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.042 [2024-11-18 13:09:40.647844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.042 [2024-11-18 13:09:40.647854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.042 [2024-11-18 13:09:40.647863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.042 [2024-11-18 13:09:40.647871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.042 [2024-11-18 13:09:40.660070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.042 [2024-11-18 13:09:40.660516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-11-18 13:09:40.660536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.042 [2024-11-18 13:09:40.660545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.042 [2024-11-18 13:09:40.660725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.042 [2024-11-18 13:09:40.660903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.042 [2024-11-18 13:09:40.660914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.042 [2024-11-18 13:09:40.660921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.042 [2024-11-18 13:09:40.660934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.042 [2024-11-18 13:09:40.673127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.042 [2024-11-18 13:09:40.673573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-11-18 13:09:40.673592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.043 [2024-11-18 13:09:40.673600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.043 [2024-11-18 13:09:40.673773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.043 [2024-11-18 13:09:40.673947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.043 [2024-11-18 13:09:40.673957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.043 [2024-11-18 13:09:40.673964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.043 [2024-11-18 13:09:40.673971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.043 [2024-11-18 13:09:40.686187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.043 [2024-11-18 13:09:40.686576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-11-18 13:09:40.686596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.043 [2024-11-18 13:09:40.686605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.043 [2024-11-18 13:09:40.686785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.043 [2024-11-18 13:09:40.686964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.043 [2024-11-18 13:09:40.686974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.043 [2024-11-18 13:09:40.686982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.043 [2024-11-18 13:09:40.686989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.043 [2024-11-18 13:09:40.699389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.043 [2024-11-18 13:09:40.699756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-11-18 13:09:40.699774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.043 [2024-11-18 13:09:40.699783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.043 [2024-11-18 13:09:40.699963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.043 [2024-11-18 13:09:40.700143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.043 [2024-11-18 13:09:40.700153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.043 [2024-11-18 13:09:40.700164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.043 [2024-11-18 13:09:40.700172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.043 [2024-11-18 13:09:40.712571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.043 [2024-11-18 13:09:40.712874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-11-18 13:09:40.712892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.043 [2024-11-18 13:09:40.712901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.043 [2024-11-18 13:09:40.713079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.043 [2024-11-18 13:09:40.713258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.043 [2024-11-18 13:09:40.713268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.043 [2024-11-18 13:09:40.713275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.043 [2024-11-18 13:09:40.713281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.043 [2024-11-18 13:09:40.720073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.043 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.043 [2024-11-18 13:09:40.725741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.043 [2024-11-18 13:09:40.726181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-11-18 13:09:40.726200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.043 [2024-11-18 13:09:40.726210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.043 [2024-11-18 13:09:40.726394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.043 [2024-11-18 13:09:40.726574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.043 [2024-11-18 13:09:40.726584] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.043 [2024-11-18 13:09:40.726592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.043 [2024-11-18 13:09:40.726599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.303 [2024-11-18 13:09:40.738807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.303 [2024-11-18 13:09:40.739269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.303 [2024-11-18 13:09:40.739288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.303 [2024-11-18 13:09:40.739300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.303 [2024-11-18 13:09:40.739489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.303 [2024-11-18 13:09:40.739671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.303 [2024-11-18 13:09:40.739681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.303 [2024-11-18 13:09:40.739688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.303 [2024-11-18 13:09:40.739695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.303 [2024-11-18 13:09:40.751903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.303 [2024-11-18 13:09:40.752330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.303 [2024-11-18 13:09:40.752349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.303 [2024-11-18 13:09:40.752363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.303 [2024-11-18 13:09:40.752542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.303 [2024-11-18 13:09:40.752721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.303 [2024-11-18 13:09:40.752731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.303 [2024-11-18 13:09:40.752738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.303 [2024-11-18 13:09:40.752745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.303 Malloc0 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.303 [2024-11-18 13:09:40.765055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.303 [2024-11-18 13:09:40.765510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.303 [2024-11-18 13:09:40.765529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.303 [2024-11-18 13:09:40.765539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.303 [2024-11-18 13:09:40.765718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.303 [2024-11-18 13:09:40.765899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.303 [2024-11-18 13:09:40.765910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.303 [2024-11-18 13:09:40.765917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.303 [2024-11-18 13:09:40.765925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.303 [2024-11-18 13:09:40.778253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.303 [2024-11-18 13:09:40.778696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.303 [2024-11-18 13:09:40.778719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006500 with addr=10.0.0.2, port=4420 00:26:43.303 [2024-11-18 13:09:40.778728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006500 is same with the state(6) to be set 00:26:43.303 [2024-11-18 13:09:40.778916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006500 (9): Bad file descriptor 00:26:43.303 [2024-11-18 13:09:40.779096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.303 [2024-11-18 13:09:40.779106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:26:43.303 [2024-11-18 13:09:40.779113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.303 [2024-11-18 13:09:40.779121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.303 [2024-11-18 13:09:40.782144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.303 13:09:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2477074 00:26:43.303 [2024-11-18 13:09:40.791387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.303 [2024-11-18 13:09:40.812922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:26:44.241 4803.50 IOPS, 18.76 MiB/s [2024-11-18T12:09:43.322Z] 5740.43 IOPS, 22.42 MiB/s [2024-11-18T12:09:44.258Z] 6397.62 IOPS, 24.99 MiB/s [2024-11-18T12:09:45.196Z] 6943.89 IOPS, 27.12 MiB/s [2024-11-18T12:09:46.132Z] 7357.70 IOPS, 28.74 MiB/s [2024-11-18T12:09:47.070Z] 7699.27 IOPS, 30.08 MiB/s [2024-11-18T12:09:48.005Z] 7981.17 IOPS, 31.18 MiB/s [2024-11-18T12:09:48.942Z] 8230.46 IOPS, 32.15 MiB/s [2024-11-18T12:09:50.319Z] 8436.29 IOPS, 32.95 MiB/s 00:26:52.617 Latency(us) 00:26:52.617 [2024-11-18T12:09:50.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.618 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:52.618 Verification LBA range: start 0x0 length 0x4000 00:26:52.618 Nvme1n1 : 15.01 8608.33 33.63 10697.69 0.00 6609.97 438.09 14588.88 00:26:52.618 [2024-11-18T12:09:50.320Z] =================================================================================================================== 00:26:52.618 
[2024-11-18T12:09:50.320Z] Total : 8608.33 33.63 10697.69 0.00 6609.97 438.09 14588.88 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:52.618 rmmod nvme_tcp 00:26:52.618 rmmod nvme_fabrics 00:26:52.618 rmmod nvme_keyring 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2477999 ']' 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2477999 
00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 2477999 ']' 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 2477999 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2477999 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2477999' 00:26:52.618 killing process with pid 2477999 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 2477999 00:26:52.618 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 2477999 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.878 13:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.783 13:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:54.783 00:26:54.783 real 0m26.854s 00:26:54.783 user 1m3.263s 00:26:54.784 sys 0m6.912s 00:26:54.784 13:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:54.784 13:09:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.784 ************************************ 00:26:54.784 END TEST nvmf_bdevperf 00:26:54.784 ************************************ 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.044 ************************************ 00:26:55.044 START TEST nvmf_target_disconnect 00:26:55.044 ************************************ 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:55.044 * Looking for test storage... 
00:26:55.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:55.044 13:09:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:55.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.044 
--rc genhtml_branch_coverage=1 00:26:55.044 --rc genhtml_function_coverage=1 00:26:55.044 --rc genhtml_legend=1 00:26:55.044 --rc geninfo_all_blocks=1 00:26:55.044 --rc geninfo_unexecuted_blocks=1 00:26:55.044 00:26:55.044 ' 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:55.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.044 --rc genhtml_branch_coverage=1 00:26:55.044 --rc genhtml_function_coverage=1 00:26:55.044 --rc genhtml_legend=1 00:26:55.044 --rc geninfo_all_blocks=1 00:26:55.044 --rc geninfo_unexecuted_blocks=1 00:26:55.044 00:26:55.044 ' 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:55.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.044 --rc genhtml_branch_coverage=1 00:26:55.044 --rc genhtml_function_coverage=1 00:26:55.044 --rc genhtml_legend=1 00:26:55.044 --rc geninfo_all_blocks=1 00:26:55.044 --rc geninfo_unexecuted_blocks=1 00:26:55.044 00:26:55.044 ' 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:55.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.044 --rc genhtml_branch_coverage=1 00:26:55.044 --rc genhtml_function_coverage=1 00:26:55.044 --rc genhtml_legend=1 00:26:55.044 --rc geninfo_all_blocks=1 00:26:55.044 --rc geninfo_unexecuted_blocks=1 00:26:55.044 00:26:55.044 ' 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.044 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.045 13:09:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:55.045 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.304 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:55.304 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:55.304 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.304 13:09:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.767 
13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:00.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:00.767 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:00.767 Found net devices under 0000:86:00.0: cvl_0_0 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:00.767 Found net devices under 0000:86:00.1: cvl_0_1 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:00.767 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.768 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.027 13:09:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:01.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:27:01.027 00:27:01.027 --- 10.0.0.2 ping statistics --- 00:27:01.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.027 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:01.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:27:01.027 00:27:01.027 --- 10.0.0.1 ping statistics --- 00:27:01.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.027 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:01.027 13:09:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:01.027 ************************************ 00:27:01.027 START TEST nvmf_target_disconnect_tc1 00:27:01.027 ************************************ 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:01.027 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.287 [2024-11-18 13:09:58.786106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.287 [2024-11-18 13:09:58.786152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e45ab0 with 
addr=10.0.0.2, port=4420 00:27:01.287 [2024-11-18 13:09:58.786170] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:01.287 [2024-11-18 13:09:58.786182] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:01.287 [2024-11-18 13:09:58.786189] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:01.287 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:01.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:01.287 Initializing NVMe Controllers 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.287 00:27:01.287 real 0m0.116s 00:27:01.287 user 0m0.048s 00:27:01.287 sys 0m0.068s 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:01.287 ************************************ 00:27:01.287 END TEST nvmf_target_disconnect_tc1 00:27:01.287 ************************************ 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:01.287 13:09:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:01.287 ************************************ 00:27:01.287 START TEST nvmf_target_disconnect_tc2 00:27:01.287 ************************************ 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2483174 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2483174 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2483174 ']' 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:01.287 13:09:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.287 [2024-11-18 13:09:58.927612] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:27:01.287 [2024-11-18 13:09:58.927655] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.547 [2024-11-18 13:09:58.990080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.547 [2024-11-18 13:09:59.032698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.547 [2024-11-18 13:09:59.032737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.547 [2024-11-18 13:09:59.032744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.547 [2024-11-18 13:09:59.032750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.547 [2024-11-18 13:09:59.032755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:01.547 [2024-11-18 13:09:59.034393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:01.547 [2024-11-18 13:09:59.034503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:01.547 [2024-11-18 13:09:59.034524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:01.547 [2024-11-18 13:09:59.034526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.547 Malloc0
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.547 [2024-11-18 13:09:59.204800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.547 [2024-11-18 13:09:59.237027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:01.547 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.806 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:01.806 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2483203
00:27:01.806 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:01.806 13:09:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:03.721 13:10:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2483174
00:27:03.721 13:10:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:03.721 Read completed with error (sct=0, sc=8)
00:27:03.721 starting I/O failed
00:27:03.721 Write completed with error (sct=0, sc=8)
00:27:03.721 starting I/O failed
[... the "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for the remaining outstanding I/Os, interleaved with the four per-qpair CQ transport errors below ...]
00:27:03.721 [2024-11-18 13:10:01.269372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:03.722 [2024-11-18 13:10:01.269576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:03.722 [2024-11-18 13:10:01.269781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:03.722 [2024-11-18 13:10:01.269974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:03.722 [2024-11-18 13:10:01.270215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.722 [2024-11-18 13:10:01.270239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:03.722 qpair failed and we were unable to recover it.
[... the connect() / sock-connection-error / "qpair failed and we were unable to recover it." sequence repeats with varying timestamps ...]
00:27:03.722 [2024-11-18 13:10:01.271072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.722 [2024-11-18 13:10:01.271084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:03.722 qpair failed and we were unable to recover it.
[... the same three-line connect() / sock-connection-error / "qpair failed and we were unable to recover it." sequence repeats with varying timestamps, first for tqpair=0x7fad18000b90 and then for tqpair=0x73fba0, with addr=10.0.0.2, port=4420 throughout ...]
00:27:03.724 [2024-11-18 13:10:01.284458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-11-18 13:10:01.284491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-11-18 13:10:01.284599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-11-18 13:10:01.284629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-11-18 13:10:01.284767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-11-18 13:10:01.284800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-11-18 13:10:01.284938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-11-18 13:10:01.284970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-11-18 13:10:01.285159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-11-18 13:10:01.285191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 
00:27:03.724 [2024-11-18 13:10:01.285375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-11-18 13:10:01.285414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-11-18 13:10:01.285605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-11-18 13:10:01.285637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.285746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.285778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.286034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.286067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.286346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.286402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-11-18 13:10:01.286585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.286617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.286746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.286779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.286981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.287013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.287298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.287330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.287558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.287591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-11-18 13:10:01.287774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.287807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.288017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.288050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.288259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.288291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.288506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.288539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.288725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.288758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-11-18 13:10:01.288900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.288932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.289149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.289182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.289385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.289419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.289601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.289632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.289869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.289902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-11-18 13:10:01.290185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.290217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.290433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.290466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.290656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.290690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.290815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.290848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.291059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.291090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-11-18 13:10:01.291286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.291318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.291606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.291641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.291782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.291815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.291961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.291994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.292190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.292222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-11-18 13:10:01.292436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.292469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.292595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.292627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.292803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.292836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.293010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.293041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.293260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.293291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-11-18 13:10:01.293489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-11-18 13:10:01.293521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-11-18 13:10:01.293696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.293728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.293888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.293920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.294106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.294138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.294311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.294344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 
00:27:03.726 [2024-11-18 13:10:01.294474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.294506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.294653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.294691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.294931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.294964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.295197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.295228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.295368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.295402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 
00:27:03.726 [2024-11-18 13:10:01.295598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.295631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.295968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.296001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.296213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.296246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.296442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.296476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.296740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.296772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 
00:27:03.726 [2024-11-18 13:10:01.296982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.297015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.297204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.297237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.297491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.297524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.297791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.297825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.297967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.298001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 
00:27:03.726 [2024-11-18 13:10:01.298140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.298174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.298446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.298479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.298761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.298794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.298933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.298965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-11-18 13:10:01.299096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-11-18 13:10:01.299128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-11-18 13:10:01.299320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.299387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.299520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.299553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.299741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.299774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.299892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.299924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.300204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.300237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-11-18 13:10:01.300500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.300534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.300724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.300756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.300899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.300931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.301119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.301159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.301300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.301332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-11-18 13:10:01.301561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.301595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.301716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.301749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.301856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.301888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.302078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.302111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.302362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.302397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-11-18 13:10:01.302527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.302560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.302685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.302717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.302891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.302923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.303211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.303243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.303454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.303487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-11-18 13:10:01.303711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.303742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.303864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.303896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.304031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.304065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.304241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.304273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.304533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.304567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-11-18 13:10:01.304776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.304809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.305103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.305135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.305402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.305436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.305585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.305618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.305789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.305821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-11-18 13:10:01.306044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.306076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-11-18 13:10:01.306314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-11-18 13:10:01.306347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.306613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.306646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.306772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.306804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.306916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.306949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-11-18 13:10:01.307141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.307173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.307325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.307384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.307564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.307597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.307745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.307777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.308099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.308132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-11-18 13:10:01.308395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.308428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.308565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.308598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.308838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.308871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.309145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.309177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.309373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.309406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-11-18 13:10:01.309593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.309626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.309811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.309843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.310120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.310153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.310366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.310400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.310595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.310634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-11-18 13:10:01.310869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.310902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.311016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.311049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.311239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.311272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.311453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.311488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.311731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.311763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-11-18 13:10:01.311889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.311921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.312167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.312200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.312393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.312426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.312564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.312596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.312773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.312806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-11-18 13:10:01.312929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.312961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.313183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.313216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.313485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.313519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.313712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.313745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-11-18 13:10:01.313874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-11-18 13:10:01.313907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-11-18 13:10:01.314023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.314056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.314319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.314360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.314643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.314676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.314870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.314902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.315134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.315166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 
00:27:03.729 [2024-11-18 13:10:01.315376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.315410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.315519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.315550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.315742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.315775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.316050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.316083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.316339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.316382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 
00:27:03.729 [2024-11-18 13:10:01.316596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.316628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.316819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.316853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.317112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.317144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.317334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.317396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.317593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.317626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 
00:27:03.729 [2024-11-18 13:10:01.317763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.317796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.318045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.318077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.318256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.318288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.318551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.318585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.318772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.318806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 
00:27:03.729 [2024-11-18 13:10:01.318935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.318966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.319231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.319263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.319383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.319418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.319554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.319586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.319795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.319827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 
00:27:03.729 [2024-11-18 13:10:01.320000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.320073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.320293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.320329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.320538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.320572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.320695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.320728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.320920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.320951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 
00:27:03.729 [2024-11-18 13:10:01.321137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.321169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-11-18 13:10:01.321431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-11-18 13:10:01.321465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.321584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.321615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.321733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.321766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.322011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.322044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 
00:27:03.730 [2024-11-18 13:10:01.322246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.322279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.322555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.322589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.322739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.322771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.322903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.322943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.323253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.323285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 
00:27:03.730 [2024-11-18 13:10:01.323530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.323564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.323749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.323781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.323987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.324019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.324262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.324295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-11-18 13:10:01.324557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-11-18 13:10:01.324591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 
00:27:03.731 [2024-11-18 13:10:01.329428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.731 [2024-11-18 13:10:01.329514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:03.731 qpair failed and we were unable to recover it.
00:27:03.731 [2024-11-18 13:10:01.329671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.731 [2024-11-18 13:10:01.329708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:03.731 qpair failed and we were unable to recover it.
00:27:03.731 [2024-11-18 13:10:01.329839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.731 [2024-11-18 13:10:01.329872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:03.731 qpair failed and we were unable to recover it.
00:27:03.731 [2024-11-18 13:10:01.329996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.731 [2024-11-18 13:10:01.330030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:03.731 qpair failed and we were unable to recover it.
00:27:03.731 [2024-11-18 13:10:01.330268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.731 [2024-11-18 13:10:01.330302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:03.731 qpair failed and we were unable to recover it.
00:27:03.734 [2024-11-18 13:10:01.350051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.350084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.350299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.350331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.350551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.350584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.350831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.350864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.351064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.351097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 
00:27:03.734 [2024-11-18 13:10:01.351340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.351386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.351657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.351690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.351903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.351934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.352179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.352212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.352400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.352434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 
00:27:03.734 [2024-11-18 13:10:01.352677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.352710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.352828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.352861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.353105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.353137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.353446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.353479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.353787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.353820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 
00:27:03.734 [2024-11-18 13:10:01.354084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.354116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.354239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.354271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.354472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.354507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.354755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.354788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.355064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.355096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 
00:27:03.734 [2024-11-18 13:10:01.355340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.355382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.355627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.355660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.355858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.355891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.356093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.356126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.356371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.356405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 
00:27:03.734 [2024-11-18 13:10:01.356617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.356649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.734 qpair failed and we were unable to recover it. 00:27:03.734 [2024-11-18 13:10:01.356828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.734 [2024-11-18 13:10:01.356860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.357055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.357087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.357415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.357453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.357573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.357605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 
00:27:03.735 [2024-11-18 13:10:01.357826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.357867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.358117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.358150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.358338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.358381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.358574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.358607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.358825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.358858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 
00:27:03.735 [2024-11-18 13:10:01.359120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.359153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.359407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.359440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.359650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.359683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.359924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.359957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.360202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.360234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 
00:27:03.735 [2024-11-18 13:10:01.360500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.360534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.360827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.360859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.361054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.361087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.361297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.361329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.361485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.361519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 
00:27:03.735 [2024-11-18 13:10:01.361657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.361690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.361838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.361870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.361982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.362015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.362199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.362232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.362429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.362463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 
00:27:03.735 [2024-11-18 13:10:01.362606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.362640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.362843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.362876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.363072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.363105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.363291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.363324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.363627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.363662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 
00:27:03.735 [2024-11-18 13:10:01.363920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.363953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.364141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.364174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.735 qpair failed and we were unable to recover it. 00:27:03.735 [2024-11-18 13:10:01.364518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.735 [2024-11-18 13:10:01.364553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.364789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.364823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.364933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.364966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 
00:27:03.736 [2024-11-18 13:10:01.365238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.365270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.365414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.365449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.365647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.365679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.365962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.365994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.366207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.366240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 
00:27:03.736 [2024-11-18 13:10:01.366516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.366550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.366754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.366787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.367058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.367091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.367373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.367407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.367546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.367579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 
00:27:03.736 [2024-11-18 13:10:01.367757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.367797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.367941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.367973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.368088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.368121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.368345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.368387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.368661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.368693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 
00:27:03.736 [2024-11-18 13:10:01.368872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.368905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.369214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.369247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.369478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.369512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.369650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.369683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.369948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.369981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 
00:27:03.736 [2024-11-18 13:10:01.370295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.370328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.370543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.370577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.370705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.370738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.370954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.370987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.371262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.371295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 
00:27:03.736 [2024-11-18 13:10:01.371594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.371628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.371883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.371916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.372128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.372160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.372409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.372442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.372638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.372671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 
00:27:03.736 [2024-11-18 13:10:01.372799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.736 [2024-11-18 13:10:01.372832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.736 qpair failed and we were unable to recover it. 00:27:03.736 [2024-11-18 13:10:01.373119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.373151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.373292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.373324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.373533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.373566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.373765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.373796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 
00:27:03.737 [2024-11-18 13:10:01.373986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.374018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.374216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.374249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.374556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.374592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.374802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.374835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.375105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.375138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 
00:27:03.737 [2024-11-18 13:10:01.375311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.375345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.375629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.375663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.375782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.375815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.375966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.375999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.376269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.376302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 
00:27:03.737 [2024-11-18 13:10:01.376585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.376620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.376878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.376910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.377190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.377224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.377520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.377553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.377757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.377790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 
00:27:03.737 [2024-11-18 13:10:01.377992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.378031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.378248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.378280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.378473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.378507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.378758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.378790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.379019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.379052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 
00:27:03.737 [2024-11-18 13:10:01.379317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.379350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.379558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.379591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.379747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.379780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.379978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.380011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.737 [2024-11-18 13:10:01.380215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.380248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 
00:27:03.737 [2024-11-18 13:10:01.380389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.737 [2024-11-18 13:10:01.380424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.737 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.380612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.380644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.380791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.380823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.381075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.381109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.381295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.381328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 
00:27:03.738 [2024-11-18 13:10:01.381530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.381565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.381764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.381798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.381980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.382013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.382289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.382321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.382465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.382500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 
00:27:03.738 [2024-11-18 13:10:01.382621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.382655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.382905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.382938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.383138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.383172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.383396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.383429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.383653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.383686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 
00:27:03.738 [2024-11-18 13:10:01.383844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.383878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.384191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.384240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.384453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.384488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.384745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.384779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.385001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.385033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 
00:27:03.738 [2024-11-18 13:10:01.385234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.385267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.385524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.385558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.385774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.385808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.385949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.385981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.386244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.386277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 
00:27:03.738 [2024-11-18 13:10:01.386454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.386488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.386765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.386798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.386990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.387023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.387229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.387261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 00:27:03.738 [2024-11-18 13:10:01.387553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.738 [2024-11-18 13:10:01.387588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.738 qpair failed and we were unable to recover it. 
00:27:03.739 [2024-11-18 13:10:01.387797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.387835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.388140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.388172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.388482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.388516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.388795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.388828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.389063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.389097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 
00:27:03.739 [2024-11-18 13:10:01.389278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.389310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.389488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.389523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.389812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.389845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.390073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.390106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.390232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.390265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 
00:27:03.739 [2024-11-18 13:10:01.390480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.390514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.390723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.390755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.391004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.391037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.391314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.391347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.391587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.391619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 
00:27:03.739 [2024-11-18 13:10:01.391895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.391928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.392130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.392164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.392383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.392418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.392619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.392653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.392840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.392873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 
00:27:03.739 [2024-11-18 13:10:01.393122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.393155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.393404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.393438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.393731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.393763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.393996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.394029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 00:27:03.739 [2024-11-18 13:10:01.394218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.739 [2024-11-18 13:10:01.394251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:03.739 qpair failed and we were unable to recover it. 
00:27:04.019 [2024-11-18 13:10:01.416087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.019 [2024-11-18 13:10:01.416120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:04.019 qpair failed and we were unable to recover it.
00:27:04.019 [2024-11-18 13:10:01.416322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.019 [2024-11-18 13:10:01.416364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:04.019 qpair failed and we were unable to recover it.
00:27:04.019 [2024-11-18 13:10:01.416569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.019 [2024-11-18 13:10:01.416646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.019 qpair failed and we were unable to recover it.
00:27:04.019 [2024-11-18 13:10:01.416931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.019 [2024-11-18 13:10:01.416969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.019 qpair failed and we were unable to recover it.
00:27:04.019 [2024-11-18 13:10:01.417249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.019 [2024-11-18 13:10:01.417284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.019 qpair failed and we were unable to recover it.
00:27:04.020 [2024-11-18 13:10:01.423385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.423419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-11-18 13:10:01.423672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.423706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-11-18 13:10:01.423991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.424025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-11-18 13:10:01.424303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.424337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-11-18 13:10:01.424557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.424590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 
00:27:04.020 [2024-11-18 13:10:01.424869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.424902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-11-18 13:10:01.425104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.425137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-11-18 13:10:01.425322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.425363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-11-18 13:10:01.425618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.425652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-11-18 13:10:01.425862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-11-18 13:10:01.425894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 
00:27:04.020 [2024-11-18 13:10:01.426179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.426213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.426380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.426415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.426604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.426639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.426791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.426823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.427120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.427155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-11-18 13:10:01.427392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.427427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.427584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.427617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.427812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.427845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.428125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.428157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.428415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.428449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-11-18 13:10:01.428750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.428782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.428965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.428998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.429197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.429230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.429440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.429474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.429684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.429717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-11-18 13:10:01.429927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.429967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.430169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.430203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.430398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.430433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.430691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.430724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.430983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.431016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-11-18 13:10:01.431365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.431400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.431608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.431640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.431856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.431889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.432078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.432111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.432394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.432430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-11-18 13:10:01.432647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.432680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.432939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.432972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.433268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.433301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.433500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.433536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.433801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.433835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-11-18 13:10:01.434130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-11-18 13:10:01.434165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-11-18 13:10:01.434315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.434348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.434567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.434602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.434813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.434846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.435156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.435189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 
00:27:04.022 [2024-11-18 13:10:01.435300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.435334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.435559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.435593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.435806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.435839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.435960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.435993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.436273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.436307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 
00:27:04.022 [2024-11-18 13:10:01.436544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.436580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.436800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.436834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.437055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.437089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.437323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.437376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.437633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.437667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 
00:27:04.022 [2024-11-18 13:10:01.437861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.437894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.438178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.438212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.438331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.438375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.438510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.438543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.438723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.438757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 
00:27:04.022 [2024-11-18 13:10:01.438945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.438978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.439280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.439313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.439522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.439556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.439773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.439806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.440077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.440111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 
00:27:04.022 [2024-11-18 13:10:01.440331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.440379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.440647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.440681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.440901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.440935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.441098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.441132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.441405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.441439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 
00:27:04.022 [2024-11-18 13:10:01.441649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.441682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.441910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.441943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.442135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.442168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-11-18 13:10:01.442374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-11-18 13:10:01.442409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.023 [2024-11-18 13:10:01.442528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-11-18 13:10:01.442561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 
00:27:04.023 [2024-11-18 13:10:01.442706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.023 [2024-11-18 13:10:01.442738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.023 qpair failed and we were unable to recover it.
[identical connect() errno = 111 retry triplet repeated through 2024-11-18 13:10:01.471271 for tqpair=0x7fad18000b90, addr=10.0.0.2, port=4420]
00:27:04.026 [2024-11-18 13:10:01.471487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.026 [2024-11-18 13:10:01.471521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.026 qpair failed and we were unable to recover it. 00:27:04.026 [2024-11-18 13:10:01.471703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.026 [2024-11-18 13:10:01.471736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.026 qpair failed and we were unable to recover it. 00:27:04.026 [2024-11-18 13:10:01.471859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.026 [2024-11-18 13:10:01.471892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.026 qpair failed and we were unable to recover it. 00:27:04.026 [2024-11-18 13:10:01.472003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.026 [2024-11-18 13:10:01.472035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.026 qpair failed and we were unable to recover it. 00:27:04.026 [2024-11-18 13:10:01.472257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.026 [2024-11-18 13:10:01.472293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.026 qpair failed and we were unable to recover it. 
00:27:04.026 [2024-11-18 13:10:01.472503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.026 [2024-11-18 13:10:01.472537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.026 qpair failed and we were unable to recover it. 00:27:04.026 [2024-11-18 13:10:01.472796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.026 [2024-11-18 13:10:01.472829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.026 qpair failed and we were unable to recover it. 00:27:04.026 [2024-11-18 13:10:01.473046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.026 [2024-11-18 13:10:01.473078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.026 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.473218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.473252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.473470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.473506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 
00:27:04.027 [2024-11-18 13:10:01.473738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.473771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.474006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.474039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.474291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.474324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.474554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.474587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.474786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.474819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 
00:27:04.027 [2024-11-18 13:10:01.475114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.475147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.475330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.475373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.475527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.475560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.475701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.475735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.475930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.475963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 
00:27:04.027 [2024-11-18 13:10:01.476155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.476188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.476343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.476387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.476544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.476586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.476788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.476822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.477149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.477183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 
00:27:04.027 [2024-11-18 13:10:01.477340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.477385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.477600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.477633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.477822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.477854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.478135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.478168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.478365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.478400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 
00:27:04.027 [2024-11-18 13:10:01.478534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.478568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.478774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.478807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.480380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.480443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.480746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.480781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.480926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.480960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 
00:27:04.027 [2024-11-18 13:10:01.481166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.481199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.481422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.481457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.481601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.481636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.481769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.481801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.481947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.481980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 
00:27:04.027 [2024-11-18 13:10:01.482236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.482270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.482546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.482581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.027 qpair failed and we were unable to recover it. 00:27:04.027 [2024-11-18 13:10:01.482783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.027 [2024-11-18 13:10:01.482816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.483092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.483126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.483433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.483467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 
00:27:04.028 [2024-11-18 13:10:01.483616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.483648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.483779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.483812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.484095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.484129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.484408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.484442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.484598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.484633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 
00:27:04.028 [2024-11-18 13:10:01.484758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.484791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.485062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.485098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.485366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.485400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.485545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.485578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.485706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.485738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 
00:27:04.028 [2024-11-18 13:10:01.485933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.485968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.486246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.486280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.486481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.486516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.486796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.486830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.486995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.487028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 
00:27:04.028 [2024-11-18 13:10:01.487232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.487266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.487475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.487509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.487664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.487703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.487960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.487992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.488184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.488216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 
00:27:04.028 [2024-11-18 13:10:01.488457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.488490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.488745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.488780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.488982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.489015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.489200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.489233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.489397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.489431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 
00:27:04.028 [2024-11-18 13:10:01.489587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.489620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.489817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.489850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.490125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.490159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.490373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.490407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-11-18 13:10:01.490537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.490570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 
00:27:04.028 [2024-11-18 13:10:01.490809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-11-18 13:10:01.490842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it.
[log condensed: the same three-message sequence — posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats continuously from 2024-11-18 13:10:01.490809 through 13:10:01.518873 (wall-clock 00:27:04.028–00:27:04.032), differing only in timestamps]
00:27:04.032 [2024-11-18 13:10:01.519190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-11-18 13:10:01.519223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-11-18 13:10:01.519428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-11-18 13:10:01.519462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-11-18 13:10:01.521075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-11-18 13:10:01.521137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-11-18 13:10:01.521392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-11-18 13:10:01.521427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-11-18 13:10:01.521582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-11-18 13:10:01.521616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 
00:27:04.032 [2024-11-18 13:10:01.523083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-11-18 13:10:01.523139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-11-18 13:10:01.523402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-11-18 13:10:01.523439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-11-18 13:10:01.523650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-11-18 13:10:01.523685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-11-18 13:10:01.523967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-11-18 13:10:01.524002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.524237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.524271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-11-18 13:10:01.524553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.524588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.524867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.524901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.525051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.525085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.525277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.525311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.525609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.525644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-11-18 13:10:01.525865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.525900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.526117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.526151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.526345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.526389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.526621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.526656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.526791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.526824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-11-18 13:10:01.527053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.527088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.527346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.527396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.527554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.527588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.527806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.527839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.528025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.528060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-11-18 13:10:01.528328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.528373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.528609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.528643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.528772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.528808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.528999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.529032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.529182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.529217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-11-18 13:10:01.529502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.529538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.529742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.529775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.529957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.529998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.530282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.530316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.530484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.530519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-11-18 13:10:01.530659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.530693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.530907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.530943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.531076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.531108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.531335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.531378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-11-18 13:10:01.531635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-11-18 13:10:01.531671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-11-18 13:10:01.531878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.531914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.532187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.532220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.532504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.532539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.532838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.532872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.533020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.533053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-11-18 13:10:01.533307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.533341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.533559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.533595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.533877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.533910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.534118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.534153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.534408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.534444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-11-18 13:10:01.534630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.534664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.534864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.534897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.535244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.535276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.535500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.535538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.535728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.535762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-11-18 13:10:01.535924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.535959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.536158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.536193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.536412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.536447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.536568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.536601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.536902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.536936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-11-18 13:10:01.537150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.537184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.537461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.537495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.537609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.537642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.537838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.537870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.538209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.538242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-11-18 13:10:01.538441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.538476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.538616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.538649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.538903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.538935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.539129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.539164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.539418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.539452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-11-18 13:10:01.539714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-11-18 13:10:01.539747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-11-18 13:10:01.540045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-11-18 13:10:01.540078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-11-18 13:10:01.540377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-11-18 13:10:01.540418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-11-18 13:10:01.540576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-11-18 13:10:01.540610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-11-18 13:10:01.540886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-11-18 13:10:01.540918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 
00:27:04.035 [2024-11-18 13:10:01.541173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.541207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.541394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.541429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.541634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.541667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.541851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.541886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.542094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.542127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.542317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.542349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.542569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.542602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.542758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.542790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.542926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.542959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.543148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.543183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.543310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.543342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.543563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.543598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.543806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.543838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.544041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.544074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.544327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.544372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.544603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.544636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.544890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.544924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.545132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.545166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.545311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.545344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.545590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.545624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.545809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.545841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.546166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.546200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.546334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.546380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.546670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.035 [2024-11-18 13:10:01.546705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.035 qpair failed and we were unable to recover it.
00:27:04.035 [2024-11-18 13:10:01.547041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.547138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.547492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.547537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.547746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.547781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.547995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.548030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.548292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.548328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.548493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.548527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.548786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.548821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.551191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.551258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.551509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.551546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.551739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.551774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.551934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.551967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.552233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.552268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.552532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.552569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.552765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.552807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.553092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.553126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.553404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.553438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.553671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.553706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.553870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.553904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.554131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.554165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.554372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.554407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.554683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.554718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.554942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.554978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.555251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.555285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.555508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.555543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.555822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.555856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.556041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.556074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.556279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.556313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.556473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.556508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.556660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.556696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.556895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.036 [2024-11-18 13:10:01.556927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.036 qpair failed and we were unable to recover it.
00:27:04.036 [2024-11-18 13:10:01.557136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.557170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.557382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.557417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.557617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.557651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.557877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.557910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.558115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.558148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.558279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.558314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.558529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.558564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.558774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.558808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.559060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.559095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.559375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.559412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.559534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.559568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.559762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.559797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.559932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.559966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.560251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.560285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.560448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.560482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.560694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.560729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.560934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.560967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.561259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.561295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.561543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.561578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.561785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.561821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.562024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.562057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.562256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.562291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.562575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.562610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.562815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.562855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.562982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.563015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.563225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.563259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.563417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.563453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.563583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.563618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.563846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.563880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.564075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.564110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.564300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-11-18 13:10:01.564333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
00:27:04.037 [2024-11-18 13:10:01.564532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.564566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.564706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.564739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.564952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.564987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.565255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.565286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.565502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.565536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.565676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.565711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.565985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.566020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.566226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.566259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.566532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.566569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.566793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.566827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.566989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.567021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.567206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.567240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.567468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.567504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.567689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.567722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.567927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.567961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.568160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.568192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.568320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.568366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.568607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.568642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.568844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.568879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.569085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.569119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.569324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.569388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.569600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.569636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.569853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.569887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.570004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.570038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.570224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.570258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.570482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.570519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.570670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.570706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.570910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.570945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.571133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.571167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.571376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.038 [2024-11-18 13:10:01.571412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.038 qpair failed and we were unable to recover it.
00:27:04.038 [2024-11-18 13:10:01.571543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.038 [2024-11-18 13:10:01.571578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.038 qpair failed and we were unable to recover it. 00:27:04.038 [2024-11-18 13:10:01.571806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.038 [2024-11-18 13:10:01.571840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.038 qpair failed and we were unable to recover it. 00:27:04.038 [2024-11-18 13:10:01.572029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.038 [2024-11-18 13:10:01.572069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.038 qpair failed and we were unable to recover it. 00:27:04.038 [2024-11-18 13:10:01.572328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.038 [2024-11-18 13:10:01.572371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.038 qpair failed and we were unable to recover it. 00:27:04.038 [2024-11-18 13:10:01.572492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.572526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 
00:27:04.039 [2024-11-18 13:10:01.572687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.572720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.574255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.574317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.574598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.574635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.574882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.574917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.575147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.575181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 
00:27:04.039 [2024-11-18 13:10:01.575484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.575521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.575787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.575821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.576045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.576077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.576285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.576321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.576557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.576592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 
00:27:04.039 [2024-11-18 13:10:01.576747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.576780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.577060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.577094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.577380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.577416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.577620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.577655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.577911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.577946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 
00:27:04.039 [2024-11-18 13:10:01.578238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.578272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.578521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.578557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.578758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.578794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.578928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.578962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.579265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.579299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 
00:27:04.039 [2024-11-18 13:10:01.579594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.579629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.579842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.579875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.580019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.580052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.580272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.580304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.580513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.580589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 
00:27:04.039 [2024-11-18 13:10:01.580743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.580781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.580908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.580942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.581079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.581115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.581299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.581334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.581560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.581593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 
00:27:04.039 [2024-11-18 13:10:01.581785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.581818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.582067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.039 [2024-11-18 13:10:01.582102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.039 qpair failed and we were unable to recover it. 00:27:04.039 [2024-11-18 13:10:01.582380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.582417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.582613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.582646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.582781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.582815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 
00:27:04.040 [2024-11-18 13:10:01.582932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.582967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.583224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.583259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.585294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.585384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.585693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.585733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.585990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.586024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 
00:27:04.040 [2024-11-18 13:10:01.586341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.586463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.586750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.586785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.587011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.587046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.587233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.587265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.587476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.587513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 
00:27:04.040 [2024-11-18 13:10:01.587677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.587711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.587845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.587879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.588186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.588222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.588409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.588447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.588628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.588664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 
00:27:04.040 [2024-11-18 13:10:01.588826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.588859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.589064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.589105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.589369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.589406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.589617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.589650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.589782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.589816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 
00:27:04.040 [2024-11-18 13:10:01.590048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.590082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.590279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.590312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.590519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.590555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.590684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.590717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 00:27:04.040 [2024-11-18 13:10:01.590994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.040 [2024-11-18 13:10:01.591030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.040 qpair failed and we were unable to recover it. 
00:27:04.040 [2024-11-18 13:10:01.591317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.591363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 00:27:04.041 [2024-11-18 13:10:01.591579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.591614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 00:27:04.041 [2024-11-18 13:10:01.591866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.591901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 00:27:04.041 [2024-11-18 13:10:01.592215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.592249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 00:27:04.041 [2024-11-18 13:10:01.592485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.592519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 
00:27:04.041 [2024-11-18 13:10:01.592663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.592697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 00:27:04.041 [2024-11-18 13:10:01.592830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.592866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 00:27:04.041 [2024-11-18 13:10:01.593108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.593142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 00:27:04.041 [2024-11-18 13:10:01.593426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.593461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 00:27:04.041 [2024-11-18 13:10:01.594130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.041 [2024-11-18 13:10:01.594181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.041 qpair failed and we were unable to recover it. 
00:27:04.041 [2024-11-18 13:10:01.594478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.041 [2024-11-18 13:10:01.594515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.041 qpair failed and we were unable to recover it.
[... the same three-line triplet (connect() failed, errno = 111 / sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim from 13:10:01.594772 through 13:10:01.623785, differing only in timestamps ...]
00:27:04.044 [2024-11-18 13:10:01.624041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-11-18 13:10:01.624076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-11-18 13:10:01.624381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-11-18 13:10:01.624419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-11-18 13:10:01.624676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-11-18 13:10:01.624715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-11-18 13:10:01.624912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-11-18 13:10:01.624946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-11-18 13:10:01.625149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-11-18 13:10:01.625184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 
00:27:04.044 [2024-11-18 13:10:01.625463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-11-18 13:10:01.625497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-11-18 13:10:01.625797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-11-18 13:10:01.625832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.626042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.626077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.626297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.626330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.626547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.626583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 
00:27:04.045 [2024-11-18 13:10:01.626765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.626797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.626986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.627021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.627211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.627247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.627506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.627540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.627797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.627831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 
00:27:04.045 [2024-11-18 13:10:01.628035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.628070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.628275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.628310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.628512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.628547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.628826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.628859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.629122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.629156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 
00:27:04.045 [2024-11-18 13:10:01.629376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.629412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.629598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.629633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.629909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.629944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.630129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.630164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.630422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.630459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 
00:27:04.045 [2024-11-18 13:10:01.630739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.630772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.631048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.631082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.631279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.631312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.631457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.631492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.631766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.631799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 
00:27:04.045 [2024-11-18 13:10:01.632002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.632037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.632242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.632276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-11-18 13:10:01.632478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-11-18 13:10:01.632513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.632711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.632746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.633001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.633036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 
00:27:04.046 [2024-11-18 13:10:01.633341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.633387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.633577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.633614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.633805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.633837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.634108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.634144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.634434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.634470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 
00:27:04.046 [2024-11-18 13:10:01.634602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.634637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.634839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.634874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.635103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.635136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.635421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.635462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.635616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.635653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 
00:27:04.046 [2024-11-18 13:10:01.635935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.635968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.636265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.636298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.636493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.636528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.636841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.636876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.636989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.637023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 
00:27:04.046 [2024-11-18 13:10:01.637301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.637336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.637609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.637643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.638091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.638131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.638345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.638402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.638596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.638631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 
00:27:04.046 [2024-11-18 13:10:01.638752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.638787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.638994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.639028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.639154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.639190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.639471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.639506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.639662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.639696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 
00:27:04.046 [2024-11-18 13:10:01.639912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.639947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.640161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.640195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.640398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.640433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.640639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.640673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.640926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.640961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 
00:27:04.046 [2024-11-18 13:10:01.641082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.641119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.641249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-11-18 13:10:01.641284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-11-18 13:10:01.641482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-11-18 13:10:01.641517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-11-18 13:10:01.641720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-11-18 13:10:01.641755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-11-18 13:10:01.642032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-11-18 13:10:01.642066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 
00:27:04.047 [2024-11-18 13:10:01.642297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-11-18 13:10:01.642343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-11-18 13:10:01.642507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-11-18 13:10:01.642541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-11-18 13:10:01.642730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-11-18 13:10:01.642764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-11-18 13:10:01.642988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-11-18 13:10:01.643021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-11-18 13:10:01.643247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-11-18 13:10:01.643280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 
00:27:04.047 [2024-11-18 13:10:01.643567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.047 [2024-11-18 13:10:01.643602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.047 qpair failed and we were unable to recover it.
[The three messages above (connect() failed with errno = 111; sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeat verbatim, differing only in timestamp, through 2024-11-18 13:10:01.671.]
00:27:04.050 [2024-11-18 13:10:01.672249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.050 [2024-11-18 13:10:01.672283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.050 qpair failed and we were unable to recover it. 00:27:04.050 [2024-11-18 13:10:01.672544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.050 [2024-11-18 13:10:01.672578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.050 qpair failed and we were unable to recover it. 00:27:04.050 [2024-11-18 13:10:01.672781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.050 [2024-11-18 13:10:01.672817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.050 qpair failed and we were unable to recover it. 00:27:04.050 [2024-11-18 13:10:01.673085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.050 [2024-11-18 13:10:01.673120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.050 qpair failed and we were unable to recover it. 00:27:04.050 [2024-11-18 13:10:01.673333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.050 [2024-11-18 13:10:01.673380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.050 qpair failed and we were unable to recover it. 
00:27:04.050 [2024-11-18 13:10:01.673597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.050 [2024-11-18 13:10:01.673630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.050 qpair failed and we were unable to recover it. 00:27:04.050 [2024-11-18 13:10:01.673770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.050 [2024-11-18 13:10:01.673805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.050 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.674057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.674093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.674287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.674320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.674554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.674589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 
00:27:04.051 [2024-11-18 13:10:01.674730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.674764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.675003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.675039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.675229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.675264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.675462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.675498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.675781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.675815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 
00:27:04.051 [2024-11-18 13:10:01.675956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.675990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.676177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.676210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.676413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.676451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.676723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.676756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.677035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.677070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 
00:27:04.051 [2024-11-18 13:10:01.677364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.677400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.677599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.677633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.677772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.677805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.677925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.677962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.678216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.678250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 
00:27:04.051 [2024-11-18 13:10:01.678527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.678563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.678694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.678728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.678998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.679032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.679236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.679270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.679471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.679506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 
00:27:04.051 [2024-11-18 13:10:01.679654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.679688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.679967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.680000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.680185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.680220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.680508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-11-18 13:10:01.680544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.051 qpair failed and we were unable to recover it. 00:27:04.051 [2024-11-18 13:10:01.680766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.680801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 
00:27:04.052 [2024-11-18 13:10:01.681012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.681047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.681297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.681331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.681600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.681634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.681841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.681874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.682080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.682115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 
00:27:04.052 [2024-11-18 13:10:01.682305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.682340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.682470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.682504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.682784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.682817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.683086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.683121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.683400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.683438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 
00:27:04.052 [2024-11-18 13:10:01.683632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.683668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.683852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.683887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.684104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.684139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.684345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.684392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.684650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.684685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 
00:27:04.052 [2024-11-18 13:10:01.684889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.684922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.685100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.685135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.685374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.685410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.685616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.685649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.685914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.685949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 
00:27:04.052 [2024-11-18 13:10:01.686102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.686137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.686398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.686433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.686563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.686605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.686717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.686749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.686884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.686919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 
00:27:04.052 [2024-11-18 13:10:01.687105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.687138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.687256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.687292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.687487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.687524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.687649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.687684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.687867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.687901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 
00:27:04.052 [2024-11-18 13:10:01.688180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.688216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.688489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.688524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.052 [2024-11-18 13:10:01.688743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-11-18 13:10:01.688776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.052 qpair failed and we were unable to recover it. 00:27:04.053 [2024-11-18 13:10:01.689045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.053 [2024-11-18 13:10:01.689079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.053 qpair failed and we were unable to recover it. 00:27:04.053 [2024-11-18 13:10:01.689220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.053 [2024-11-18 13:10:01.689254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.053 qpair failed and we were unable to recover it. 
00:27:04.053 [2024-11-18 13:10:01.689465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.053 [2024-11-18 13:10:01.689500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.053 qpair failed and we were unable to recover it. 00:27:04.053 [2024-11-18 13:10:01.689711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.053 [2024-11-18 13:10:01.689747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.053 qpair failed and we were unable to recover it. 00:27:04.053 [2024-11-18 13:10:01.689935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.053 [2024-11-18 13:10:01.689969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.053 qpair failed and we were unable to recover it. 00:27:04.053 [2024-11-18 13:10:01.690245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.053 [2024-11-18 13:10:01.690279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.053 qpair failed and we were unable to recover it. 00:27:04.053 [2024-11-18 13:10:01.690482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.053 [2024-11-18 13:10:01.690517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.053 qpair failed and we were unable to recover it. 
00:27:04.053 [2024-11-18 13:10:01.690724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.053 [2024-11-18 13:10:01.690758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.053 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet (errno = 111, tqpair=0x73fba0, addr=10.0.0.2, port=4420) repeats continuously through 13:10:01.719916, differing only in timestamps ...]
00:27:04.333 [2024-11-18 13:10:01.720171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-11-18 13:10:01.720204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-11-18 13:10:01.720482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.720518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.720714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.720749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.720965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.721007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.721212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.721245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-11-18 13:10:01.721394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.721431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.721691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.721724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.722009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.722043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.722223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.722257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.722442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.722478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-11-18 13:10:01.722691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.722727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.722976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.723009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.723262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.723297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.723506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.723542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.723816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.723852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-11-18 13:10:01.723980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.724014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.724208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.724243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.724450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.724485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.724689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.724723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.724904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.724940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-11-18 13:10:01.725130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.725164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.725395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.725430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.725687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.725720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.725851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.725884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.726010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.726042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-11-18 13:10:01.726155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.726188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.726404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.726439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.726561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.726597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.726882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.726917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.727147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.727182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-11-18 13:10:01.727455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.727489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.727626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.727662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.727849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.727883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.728072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.728107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.728315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.728348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-11-18 13:10:01.728505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.728539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.728731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.728765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.728945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.728977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.729105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.729141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-11-18 13:10:01.729394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-11-18 13:10:01.729430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-11-18 13:10:01.729692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.729727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.729981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.730016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.730265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.730299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.730593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.730631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.730853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.730892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-11-18 13:10:01.731194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.731228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.731487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.731523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.731821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.731856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.732141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.732176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.732455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.732491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-11-18 13:10:01.732692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.732726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.732862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.732911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.733223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.733258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.733414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.733452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.733650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.733685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-11-18 13:10:01.733913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.733948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.734149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.734183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.734377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.734413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.734632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.734665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.734847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.734879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-11-18 13:10:01.735178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.735212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.735451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.735486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.735611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.735646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.735780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.735815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.736009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.736041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-11-18 13:10:01.736318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.736363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.736639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.736673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.736954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.736988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.737194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.737228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.737412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.737446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-11-18 13:10:01.737643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.737676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.737800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.737842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.738135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.738170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.738424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.738460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-11-18 13:10:01.738687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-11-18 13:10:01.738719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-11-18 13:10:01.738992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.335 [2024-11-18 13:10:01.739027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.335 qpair failed and we were unable to recover it.
00:27:04.335 [2024-11-18 13:10:01.739148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.335 [2024-11-18 13:10:01.739180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.335 qpair failed and we were unable to recover it.
00:27:04.335 [2024-11-18 13:10:01.739434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.335 [2024-11-18 13:10:01.739470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.335 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.739774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.739809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.740019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.740053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.740192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.740227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.740432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.740466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.740647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.740681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.740873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.740908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.741173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.741207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.741498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.741533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.741749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.741783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.741978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.742012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.742218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.742254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.742512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.742546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.742745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.742779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.742964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.742998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.743197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.743233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.743507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.743543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.743845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.743877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.744158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.744193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.744309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.744342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.744571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.744607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.744830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.744862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.745126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.745161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.745372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.745410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.745540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.745573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.745896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.745928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.746204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.746238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.746468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.746502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.746752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.746784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.746977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.747011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.747192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.747225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.747441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.747475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.336 [2024-11-18 13:10:01.747671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.336 [2024-11-18 13:10:01.747705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.336 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.747960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.747995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.748168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.748201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.748484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.748525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.748747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.748780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.749081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.749114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.749401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.749437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.749713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.749747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.750010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.750043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.750338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.750382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.750539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.750571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.750852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.750885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.751170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.751202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.751381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.751415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.751558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.751590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.751797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.751831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.752044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.752077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.752324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.752367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.752594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.752626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.752880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.752914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.753224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.753257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.753463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.753497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.753630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.753663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.753856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.753890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.754005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.754038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.754330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.754374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.754565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.754599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.754872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.754905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.755085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.755119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.755377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.755414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.755548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.755581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.755861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.755894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.756144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.756179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.756466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.756501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.756786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.756819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.757074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.757109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.757323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.757383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.757640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.757673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.337 qpair failed and we were unable to recover it.
00:27:04.337 [2024-11-18 13:10:01.757929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.337 [2024-11-18 13:10:01.757962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.758157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.758192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.758418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.758453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.758683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.758718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.758919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.758954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.759136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.759168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.759419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.759455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.759594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.759628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.759854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.759887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.760113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.760146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.760274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.760308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.760518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.760552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.760829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.760862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.761067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.761100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.761377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.761411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.761613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.761646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.761764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.761797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.762060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.762093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.762371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.762406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.762625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.762658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.762842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.762876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.763135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.763168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.763381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.763416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.763685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.763718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.763849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.763883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.764088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.764120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.764423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.764459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.764663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.764696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.764945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.764980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.765183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.765216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.765479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.765513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.765715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.765749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.765931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.765966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.766231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.766270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.766549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.766584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.766864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.766898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.767186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.767219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.767405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.767440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.767647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.338 [2024-11-18 13:10:01.767679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.338 qpair failed and we were unable to recover it.
00:27:04.338 [2024-11-18 13:10:01.767955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.339 [2024-11-18 13:10:01.767989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.339 qpair failed and we were unable to recover it.
00:27:04.339 [2024-11-18 13:10:01.768243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.339 [2024-11-18 13:10:01.768277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.339 qpair failed and we were unable to recover it.
00:27:04.339 [2024-11-18 13:10:01.768508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.339 [2024-11-18 13:10:01.768543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.339 qpair failed and we were unable to recover it.
00:27:04.339 [2024-11-18 13:10:01.768820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.339 [2024-11-18 13:10:01.768854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.339 qpair failed and we were unable to recover it.
00:27:04.339 [2024-11-18 13:10:01.769042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.769076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.769342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.769387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.769594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.769628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.769892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.769925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.770189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.770224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 
00:27:04.339 [2024-11-18 13:10:01.770522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.770557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.770750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.770783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.770986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.771019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.771223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.771257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.771491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.771527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 
00:27:04.339 [2024-11-18 13:10:01.771725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.771757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.772067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.772103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.772312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.772346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.772548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.772582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.772850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.772884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 
00:27:04.339 [2024-11-18 13:10:01.773085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.773120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.773242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.773276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.773554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.773590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.773815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.773849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.774102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.774137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 
00:27:04.339 [2024-11-18 13:10:01.774413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.774449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.774583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.774619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.774920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.774955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.775253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.775288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.775428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.775462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 
00:27:04.339 [2024-11-18 13:10:01.775745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.775779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.775963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.775996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.776265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.776300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.776580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.776615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.776823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.776859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 
00:27:04.339 [2024-11-18 13:10:01.777102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.777135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.777331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.777385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.777579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.777611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.777737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.339 [2024-11-18 13:10:01.777772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.339 qpair failed and we were unable to recover it. 00:27:04.339 [2024-11-18 13:10:01.778050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.778085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 
00:27:04.340 [2024-11-18 13:10:01.778280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.778317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.778453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.778488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.778741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.778777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.778960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.778993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.779224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.779258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 
00:27:04.340 [2024-11-18 13:10:01.779489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.779525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.779741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.779775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.779980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.780013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.780217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.780251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.780451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.780486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 
00:27:04.340 [2024-11-18 13:10:01.780647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.780682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.780863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.780898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.781077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.781113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.781405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.781441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.781645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.781680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 
00:27:04.340 [2024-11-18 13:10:01.781963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.781998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.782210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.782245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.782427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.782462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.782717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.782750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.782963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.782997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 
00:27:04.340 [2024-11-18 13:10:01.783147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.783181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.783375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.783412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.783642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.783675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.783889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.783929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.784123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.784157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 
00:27:04.340 [2024-11-18 13:10:01.784344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.784407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.784558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.784591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.784871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.784905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.785092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.785125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.785260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.785296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 
00:27:04.340 [2024-11-18 13:10:01.785582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.785616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.785804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.785837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.786103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.786137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.786418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.786453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.786602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.786635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 
00:27:04.340 [2024-11-18 13:10:01.786836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.786872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.340 qpair failed and we were unable to recover it. 00:27:04.340 [2024-11-18 13:10:01.787086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.340 [2024-11-18 13:10:01.787121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.787324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.787369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.787574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.787608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.787791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.787828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 
00:27:04.341 [2024-11-18 13:10:01.787968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.788001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.788203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.788238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.788508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.788542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.788729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.788761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.789016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.789053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 
00:27:04.341 [2024-11-18 13:10:01.789246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.789279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.789561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.789597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.789796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.789830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.790080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.790114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.790234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.790268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 
00:27:04.341 [2024-11-18 13:10:01.790523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.790559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.790844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.790878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.791086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.791120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.791395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.791432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.791698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.791733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 
00:27:04.341 [2024-11-18 13:10:01.792022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.792056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.792332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.792406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.792667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.792701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.792930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.792963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.793097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.793132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 
00:27:04.341 [2024-11-18 13:10:01.793406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.793443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.793627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.793660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.793787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.793822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.794039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.794072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.794340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.794394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 
00:27:04.341 [2024-11-18 13:10:01.794692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.341 [2024-11-18 13:10:01.794727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.341 qpair failed and we were unable to recover it. 00:27:04.341 [2024-11-18 13:10:01.795016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.795051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.795181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.795215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.795410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.795447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.795648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.795684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 
00:27:04.342 [2024-11-18 13:10:01.795960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.795994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.796187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.796221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.796368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.796403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.796611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.796645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.796870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.796903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 
00:27:04.342 [2024-11-18 13:10:01.797182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.797217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.797427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.797462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.797667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.797701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.797922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.797958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.798238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.798271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 
00:27:04.342 [2024-11-18 13:10:01.798485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.798521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.798655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.798689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.798866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.798901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.799176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.799209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.799395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.799430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 
00:27:04.342 [2024-11-18 13:10:01.799565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.799598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.799808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.799841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.799979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.800012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.800316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.800350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.800634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.800668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 
00:27:04.342 [2024-11-18 13:10:01.800851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.800885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.801139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.801179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.801432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.801469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.801672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.801707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.801891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.801926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 
00:27:04.342 [2024-11-18 13:10:01.802115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.802148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.802361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.802398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.802606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.802639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.802848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.802882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.803086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.803121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 
00:27:04.342 [2024-11-18 13:10:01.803302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.803335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.803474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.803509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.803731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.803765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.803972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.342 [2024-11-18 13:10:01.804004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.342 qpair failed and we were unable to recover it. 00:27:04.342 [2024-11-18 13:10:01.804257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.804291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 
00:27:04.343 [2024-11-18 13:10:01.804504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.804540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.804831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.804864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.805158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.805191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.805463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.805498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.805790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.805823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 
00:27:04.343 [2024-11-18 13:10:01.806012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.806045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.806224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.806258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.806380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.806415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.806620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.806653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.806926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.806960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 
00:27:04.343 [2024-11-18 13:10:01.807211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.807244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.807466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.807501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.807754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.807787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.808069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.808103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.808389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.808426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 
00:27:04.343 [2024-11-18 13:10:01.808698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.808731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.808953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.808987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.809288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.809322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.809530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.809563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.809764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.809798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 
00:27:04.343 [2024-11-18 13:10:01.810074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.810107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.810394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.810429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.810703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.810736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.810921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.810956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.811227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.811259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 
00:27:04.343 [2024-11-18 13:10:01.811530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.811564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.811791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.811824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.812055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.812095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.812296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.812329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 00:27:04.343 [2024-11-18 13:10:01.812637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.343 [2024-11-18 13:10:01.812671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.343 qpair failed and we were unable to recover it. 
00:27:04.343 [2024-11-18 13:10:01.812935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.343 [2024-11-18 13:10:01.812968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.343 qpair failed and we were unable to recover it.
00:27:04.343 [2024-11-18 13:10:01.813243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.343 [2024-11-18 13:10:01.813277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.343 qpair failed and we were unable to recover it.
00:27:04.343 [2024-11-18 13:10:01.813574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.343 [2024-11-18 13:10:01.813609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.343 qpair failed and we were unable to recover it.
00:27:04.343 [2024-11-18 13:10:01.813874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.343 [2024-11-18 13:10:01.813907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.343 qpair failed and we were unable to recover it.
00:27:04.343 [2024-11-18 13:10:01.814201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.343 [2024-11-18 13:10:01.814235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.343 qpair failed and we were unable to recover it.
00:27:04.343 [2024-11-18 13:10:01.814375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.343 [2024-11-18 13:10:01.814412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.343 qpair failed and we were unable to recover it.
00:27:04.343 [2024-11-18 13:10:01.814609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.343 [2024-11-18 13:10:01.814642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.343 qpair failed and we were unable to recover it.
00:27:04.343 [2024-11-18 13:10:01.814799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.343 [2024-11-18 13:10:01.814833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.815106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.815139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.815319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.815365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.815644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.815679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.815895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.815929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.816054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.816088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.816293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.816327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.816546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.816581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.816714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.816747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.816879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.816913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.817117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.817151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.817365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.817401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.817655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.817688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.817879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.817915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.818192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.818226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.818441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.818476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.818655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.818687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.818960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.819000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.819217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.819251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.819498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.819533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.819834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.819866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.820167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.820201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.820495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.820530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.820798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.820830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.821127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.821161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.821344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.821405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.821656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.821689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.821993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.822026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.822292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.822325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.822649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.822682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.822962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.822995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.823275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.823310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.823591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.823625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.823850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.823883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.824141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.824174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.824285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.824319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.824587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.824621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.824754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.824787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.344 [2024-11-18 13:10:01.825059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.344 [2024-11-18 13:10:01.825093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.344 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.825295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.825329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.825617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.825652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.825885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.825918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.826059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.826092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.826342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.826388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.826604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.826638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.826923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.826956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.827236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.827271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.827453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.827489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.827687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.827720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.827986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.828019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.828138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.828171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.828451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.828486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.828749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.828782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.829100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.829134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.829394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.829429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.829715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.829749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.829957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.829991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.830198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.830231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.830438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.830479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.830733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.830767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.831060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.831094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.831287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.831320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.831537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.831572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.831844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.831877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.832011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.832045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.832322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.832367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.832648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.832682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.832954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.832987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.833166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.833199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.833474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.833509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.833703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.833736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.833987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.834020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.345 qpair failed and we were unable to recover it.
00:27:04.345 [2024-11-18 13:10:01.834325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.345 [2024-11-18 13:10:01.834369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.834648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.834682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.834958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.834990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.835173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.835207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.835393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.835428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.835567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.835599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.835855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.835889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.836190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.836223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.836501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.836536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.836820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.836853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.837058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.837092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.837234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.837267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.837472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.837508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.837797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.837830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.838040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.838075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.838270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.838303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.838510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.838545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.838799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.838832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.839110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.839145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.839337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.839380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.839633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.839668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.839847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.839880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.840164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.840197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.840460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.840494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.840728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.840760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.840956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.840991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.841274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.841307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.841619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.841655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.841929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.841962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.842246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.842280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.842559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.842593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.842793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.842826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.843026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.843060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.843315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.843348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.843573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.843607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.843794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.346 [2024-11-18 13:10:01.843827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.346 qpair failed and we were unable to recover it.
00:27:04.346 [2024-11-18 13:10:01.844080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.346 [2024-11-18 13:10:01.844113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.346 qpair failed and we were unable to recover it. 00:27:04.346 [2024-11-18 13:10:01.844417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.346 [2024-11-18 13:10:01.844452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.346 qpair failed and we were unable to recover it. 00:27:04.346 [2024-11-18 13:10:01.844712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.346 [2024-11-18 13:10:01.844745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.346 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.845043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.845078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.845345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.845406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 
00:27:04.347 [2024-11-18 13:10:01.845609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.845643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.845840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.845873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.846060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.846092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.846291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.846325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.846522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.846555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 
00:27:04.347 [2024-11-18 13:10:01.846756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.846790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.846914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.846947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.847222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.847257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.847456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.847491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.847743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.847777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 
00:27:04.347 [2024-11-18 13:10:01.848049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.848083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.848288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.848321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.848534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.848569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.848783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.848823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.849097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.849131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 
00:27:04.347 [2024-11-18 13:10:01.849340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.849385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.849589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.849623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.849902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.849935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.850188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.850222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.850470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.850505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 
00:27:04.347 [2024-11-18 13:10:01.850758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.850791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.851083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.851117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.851332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.851376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.851617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.851651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.851853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.851885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 
00:27:04.347 [2024-11-18 13:10:01.852071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.852105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.852303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.852336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.852663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.852698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.852969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.853003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.853276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.853310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 
00:27:04.347 [2024-11-18 13:10:01.853598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.853633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.853908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.853943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.854230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.854264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.854539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.854574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 00:27:04.347 [2024-11-18 13:10:01.854795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.347 [2024-11-18 13:10:01.854829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.347 qpair failed and we were unable to recover it. 
00:27:04.347 [2024-11-18 13:10:01.855152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.855186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.855299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.855332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.855532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.855566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.855773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.855806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.856000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.856034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 
00:27:04.348 [2024-11-18 13:10:01.856217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.856251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.856538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.856573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.856831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.856865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.857010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.857045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.857255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.857289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 
00:27:04.348 [2024-11-18 13:10:01.857496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.857531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.857806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.857839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.858127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.858161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.858436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.858470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.858663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.858696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 
00:27:04.348 [2024-11-18 13:10:01.858959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.858994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.859199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.859233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.859433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.859468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.859668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.859703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.860014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.860050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 
00:27:04.348 [2024-11-18 13:10:01.860327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.860371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.860649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.860682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.860883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.860917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.861096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.861130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.861322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.861365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 
00:27:04.348 [2024-11-18 13:10:01.861640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.861674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.861918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.861952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.862145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.862178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.862456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.862513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.862777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.862812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 
00:27:04.348 [2024-11-18 13:10:01.863091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.863123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.863410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.863445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.863723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.863758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.864060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.864093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 00:27:04.348 [2024-11-18 13:10:01.864361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.348 [2024-11-18 13:10:01.864397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.348 qpair failed and we were unable to recover it. 
00:27:04.348 [2024-11-18 13:10:01.864597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.348 [2024-11-18 13:10:01.864630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.348 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x73fba0 (addr=10.0.0.2, port=4420) repeats through 13:10:01.895 ...]
00:27:04.352 [2024-11-18 13:10:01.895566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.895600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.895797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.895832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.896035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.896067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.896249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.896283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.896549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.896583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 
00:27:04.352 [2024-11-18 13:10:01.896790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.896830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.897109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.897142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.897374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.897410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.897522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.897556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.897674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.897709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 
00:27:04.352 [2024-11-18 13:10:01.897936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.897969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.898247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.898282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.898571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.898605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.898893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.898928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.899207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.899239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 
00:27:04.352 [2024-11-18 13:10:01.899528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.899562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.899859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.899893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.900104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.900136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.900268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.900302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.900603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.900639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 
00:27:04.352 [2024-11-18 13:10:01.900915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.900950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.901206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.901239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.901518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.901553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.901752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.901785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.902057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.902091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 
00:27:04.352 [2024-11-18 13:10:01.902293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.902326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.902540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.902575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.902846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.902879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.903022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.903055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.903279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.903312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 
00:27:04.352 [2024-11-18 13:10:01.903541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.903576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.903856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.903888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.904073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.904112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.904397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.904433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.352 [2024-11-18 13:10:01.904712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.904746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 
00:27:04.352 [2024-11-18 13:10:01.904927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.352 [2024-11-18 13:10:01.904960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.352 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.905154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.905188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.905380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.905415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.905645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.905679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.905812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.905845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 
00:27:04.353 [2024-11-18 13:10:01.905995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.906028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.906233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.906268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.906523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.906558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.906741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.906774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.907030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.907064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 
00:27:04.353 [2024-11-18 13:10:01.907268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.907301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.907600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.907637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.907768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.907801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.908101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.908135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.908262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.908295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 
00:27:04.353 [2024-11-18 13:10:01.908507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.908544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.908729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.908762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.909034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.909069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.909322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.909368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.909665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.909699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 
00:27:04.353 [2024-11-18 13:10:01.909974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.910007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.910295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.910328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.910479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.910518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.910766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.910799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.910986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.911019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 
00:27:04.353 [2024-11-18 13:10:01.911310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.911345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.911562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.911595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.911877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.911911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.912192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.912227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.912452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.912488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 
00:27:04.353 [2024-11-18 13:10:01.912691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.912723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.912841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.912874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.913149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.913183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.913483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.913517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.913715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.913748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 
00:27:04.353 [2024-11-18 13:10:01.913929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.913963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.914216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.914248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.353 qpair failed and we were unable to recover it. 00:27:04.353 [2024-11-18 13:10:01.914521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.353 [2024-11-18 13:10:01.914557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.354 qpair failed and we were unable to recover it. 00:27:04.354 [2024-11-18 13:10:01.914833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.354 [2024-11-18 13:10:01.914873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.354 qpair failed and we were unable to recover it. 00:27:04.354 [2024-11-18 13:10:01.915062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.354 [2024-11-18 13:10:01.915096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.354 qpair failed and we were unable to recover it. 
00:27:04.354 [2024-11-18 13:10:01.915405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.354 [2024-11-18 13:10:01.915440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.354 qpair failed and we were unable to recover it. 00:27:04.354 [2024-11-18 13:10:01.915741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.354 [2024-11-18 13:10:01.915773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.354 qpair failed and we were unable to recover it. 00:27:04.354 [2024-11-18 13:10:01.915927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.354 [2024-11-18 13:10:01.915961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.354 qpair failed and we were unable to recover it. 00:27:04.354 [2024-11-18 13:10:01.916234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.354 [2024-11-18 13:10:01.916268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.354 qpair failed and we were unable to recover it. 00:27:04.354 [2024-11-18 13:10:01.916474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.354 [2024-11-18 13:10:01.916510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.354 qpair failed and we were unable to recover it. 
00:27:04.357 [2024-11-18 13:10:01.947006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.947039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.947383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.947421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.947699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.947734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.947946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.947981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.948187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.948221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 
00:27:04.357 [2024-11-18 13:10:01.948481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.948517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.948750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.948785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.948994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.949028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.949257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.949291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.949561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.949596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 
00:27:04.357 [2024-11-18 13:10:01.949790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.949824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.950108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.950142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.950423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.950459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.950665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.950699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.950982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.951016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 
00:27:04.357 [2024-11-18 13:10:01.951217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.951252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.951456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.951492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.951696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.951731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.951937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.951971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.952224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.952263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 
00:27:04.357 [2024-11-18 13:10:01.952492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.952528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.357 [2024-11-18 13:10:01.952802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.357 [2024-11-18 13:10:01.952836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.357 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.952967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.953001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.953279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.953313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.953506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.953541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 
00:27:04.358 [2024-11-18 13:10:01.953799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.953833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.954047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.954081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.954278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.954313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.954580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.954615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.954804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.954838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 
00:27:04.358 [2024-11-18 13:10:01.955026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.955060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.955340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.955394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.955651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.955685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.955901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.955936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.956195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.956228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 
00:27:04.358 [2024-11-18 13:10:01.956484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.956520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.956714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.956749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.956889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.956923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.957127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.957161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.957363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.957400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 
00:27:04.358 [2024-11-18 13:10:01.957655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.957690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.957873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.957907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.958158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.958192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.958474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.958511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.958709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.958743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 
00:27:04.358 [2024-11-18 13:10:01.958871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.958905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.959182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.959217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.959405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.959440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.959718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.959752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.960038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.960072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 
00:27:04.358 [2024-11-18 13:10:01.960254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.960288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.960588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.960623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.960763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.960798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.961075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.961110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.961225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.961258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 
00:27:04.358 [2024-11-18 13:10:01.961553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.961589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.961706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.961741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.962023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.962057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.358 [2024-11-18 13:10:01.962338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.358 [2024-11-18 13:10:01.962384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.358 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.962500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.962535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 
00:27:04.359 [2024-11-18 13:10:01.962729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.962775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.962979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.963014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.963294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.963328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.963498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.963533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.963750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.963785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 
00:27:04.359 [2024-11-18 13:10:01.964064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.964099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.964280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.964313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.964479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.964515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.964641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.964676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.964870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.964904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 
00:27:04.359 [2024-11-18 13:10:01.965159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.965193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.965313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.965347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.965494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.965528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.965785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.965818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 00:27:04.359 [2024-11-18 13:10:01.966029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.359 [2024-11-18 13:10:01.966064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.359 qpair failed and we were unable to recover it. 
00:27:04.359 [2024-11-18 13:10:01.966287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.359 [2024-11-18 13:10:01.966322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.359 qpair failed and we were unable to recover it.
[... the same three-record failure (connect() errno = 111, ECONNREFUSED, to 10.0.0.2 port 4420 on tqpair=0x73fba0) repeats continuously through 13:10:01.992860; identical repetitions omitted ...]
00:27:04.362 [2024-11-18 13:10:01.993042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.993075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.993256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.993290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.993450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.993486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.993675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.993708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.993907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.993941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 
00:27:04.362 [2024-11-18 13:10:01.994052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.994086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.994285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.994325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.994461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.994494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.994778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.994813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.995076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.995109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 
00:27:04.362 [2024-11-18 13:10:01.995406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.995442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.995626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.995659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.995786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.995819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.996017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.362 [2024-11-18 13:10:01.996050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.362 qpair failed and we were unable to recover it. 00:27:04.362 [2024-11-18 13:10:01.996311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.996344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 
00:27:04.363 [2024-11-18 13:10:01.996554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.996588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.996698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.996731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.996932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.996966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.997237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.997270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.997474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.997509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 
00:27:04.363 [2024-11-18 13:10:01.997817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.997852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.998103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.998136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.998442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.998476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.998736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.998769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.998948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.998982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 
00:27:04.363 [2024-11-18 13:10:01.999232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.999265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.999538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.999573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:01.999827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:01.999860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.000038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.000072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.000257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.000291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 
00:27:04.363 [2024-11-18 13:10:02.000512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.000546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.000742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.000776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.000957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.000991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.001187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.001228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.001439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.001475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 
00:27:04.363 [2024-11-18 13:10:02.001668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.001702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.001927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.001960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.002152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.002186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.002485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.002520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.002698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.002731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 
00:27:04.363 [2024-11-18 13:10:02.002977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.003011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.003211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.003245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.003494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.003529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.003708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.003742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.003959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.003993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 
00:27:04.363 [2024-11-18 13:10:02.004244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.004278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.004502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.004537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.004734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.004768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.004944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.004978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 00:27:04.363 [2024-11-18 13:10:02.005202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.363 [2024-11-18 13:10:02.005237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.363 qpair failed and we were unable to recover it. 
00:27:04.363 [2024-11-18 13:10:02.005437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.005471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.005738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.005771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.005908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.005942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.006049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.006082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.006422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.006457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 
00:27:04.364 [2024-11-18 13:10:02.006708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.006742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.007000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.007034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.007179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.007213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.007409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.007445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.007631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.007665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 
00:27:04.364 [2024-11-18 13:10:02.007941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.007974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.008246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.008280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.008575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.008610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.008813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.008846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.009034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.009067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 
00:27:04.364 [2024-11-18 13:10:02.009270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.009304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.009512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.009547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.009817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.009851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.010032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.010066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.010200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.010235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 
00:27:04.364 [2024-11-18 13:10:02.010515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.010551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.010690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.010725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.010852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.010887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.011077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.011110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.011250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.011296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 
00:27:04.364 [2024-11-18 13:10:02.011464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.011499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.011654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.011688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.011896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.011929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.364 [2024-11-18 13:10:02.012198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.364 [2024-11-18 13:10:02.012232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.364 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.012421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.012458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 
00:27:04.644 [2024-11-18 13:10:02.012709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.012745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.013113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.013146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.013403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.013437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.013643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.013677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.013946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.013980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 
00:27:04.644 [2024-11-18 13:10:02.014182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.014216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.014400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.014434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.014639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.014673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.014882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.014916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.015145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.015179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 
00:27:04.644 [2024-11-18 13:10:02.015434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.015467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.015594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.644 [2024-11-18 13:10:02.015628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.644 qpair failed and we were unable to recover it. 00:27:04.644 [2024-11-18 13:10:02.015789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.015823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.016069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.016103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.016287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.016321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 
00:27:04.645 [2024-11-18 13:10:02.016490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.016525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.016788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.016822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.017123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.017157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.017420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.017456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.017599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.017633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 
00:27:04.645 [2024-11-18 13:10:02.017768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.017802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.018001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.018036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.018321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.018366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.018571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.018605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.018807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.018842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 
00:27:04.645 [2024-11-18 13:10:02.018969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.019003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.019259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.019293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.019618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.019653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.019768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.019801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.019998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.020033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 
00:27:04.645 [2024-11-18 13:10:02.020166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.020200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.020475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.020511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.020648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.020682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.020794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.020827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.021147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.021180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 
00:27:04.645 [2024-11-18 13:10:02.021422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.021459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.021771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.021805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.022100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.022134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.022404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.022439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.022609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.022643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 
00:27:04.645 [2024-11-18 13:10:02.022849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.022884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.023089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.023124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.023308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.023343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.023545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.023579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.023762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.023795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 
00:27:04.645 [2024-11-18 13:10:02.023922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.023958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.024223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.024258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.024404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.024439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.024555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.024588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 00:27:04.645 [2024-11-18 13:10:02.024823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.645 [2024-11-18 13:10:02.024858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.645 qpair failed and we were unable to recover it. 
00:27:04.646 [2024-11-18 13:10:02.025113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.025147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.025408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.025443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.025660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.025695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.025847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.025883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.026181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.026215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 
00:27:04.646 [2024-11-18 13:10:02.026432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.026466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.026663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.026698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.026854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.026889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.027114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.027148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.027290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.027324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 
00:27:04.646 [2024-11-18 13:10:02.027476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.027510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.027715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.027749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.028027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.028067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.028251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.028285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.028499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.028536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 
00:27:04.646 [2024-11-18 13:10:02.028692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.028726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.028918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.028952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.029137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.029172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.029396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.029434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.029691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.029725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 
00:27:04.646 [2024-11-18 13:10:02.029909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.029942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.030196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.030232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.030445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.030480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.030645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.030680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.030885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.030919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 
00:27:04.646 [2024-11-18 13:10:02.031148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.031181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.031388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.031424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.031584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.031618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.031876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.031910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.032123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.032157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 
00:27:04.646 [2024-11-18 13:10:02.032362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.032398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.032609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.032643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.032827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.032864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.033090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.033124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.033340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.033389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 
00:27:04.646 [2024-11-18 13:10:02.033667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.033703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.033996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.034031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.034315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.646 [2024-11-18 13:10:02.034350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.646 qpair failed and we were unable to recover it. 00:27:04.646 [2024-11-18 13:10:02.034550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.034584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 00:27:04.647 [2024-11-18 13:10:02.034801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.034835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 
00:27:04.647 [2024-11-18 13:10:02.035028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.035063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 00:27:04.647 [2024-11-18 13:10:02.035210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.035244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 00:27:04.647 [2024-11-18 13:10:02.035396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.035433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 00:27:04.647 [2024-11-18 13:10:02.035577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.035613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 00:27:04.647 [2024-11-18 13:10:02.035821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.035855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 
00:27:04.647 [2024-11-18 13:10:02.036007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.036040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 00:27:04.647 [2024-11-18 13:10:02.036268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.036303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 00:27:04.647 [2024-11-18 13:10:02.036445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.036483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 00:27:04.647 [2024-11-18 13:10:02.036685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.036719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 00:27:04.647 [2024-11-18 13:10:02.036851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.647 [2024-11-18 13:10:02.036885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.647 qpair failed and we were unable to recover it. 
00:27:04.647 [2024-11-18 13:10:02.037178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.037212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.037446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.037482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.037668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.037702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.037998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.038033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.038185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.038219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.038424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.038461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.038657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.038691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.038844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.038878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.039071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.039105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.039304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.039339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.039542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.039575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.039809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.039846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.040076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.040111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.040254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.040288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.040503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.040538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.040749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.040783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.041051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.041085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.041330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.041378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.041529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.041562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.041750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.041784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.042005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.042039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.042238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.042272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.042477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.042515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.042640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.042674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.042899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.042934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.043191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.647 [2024-11-18 13:10:02.043224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.647 qpair failed and we were unable to recover it.
00:27:04.647 [2024-11-18 13:10:02.043376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.043413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.043597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.043629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.043889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.043924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.044056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.044090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.044374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.044418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.044577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.044611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.044836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.044869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.045192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.045226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.045443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.045477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.045683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.045719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.045917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.045951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.046152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.046185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.046321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.046372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.046549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.046585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.046712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.046745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.046963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.046998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.047282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.047317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.047617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.047653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.047857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.047892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.048097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.048131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.048388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.048423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.048612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.048647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.048864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.048898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.049173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.049207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.049484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.049519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.049719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.049753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.050042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.050077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.050336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.050383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.050593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.050628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.050835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.050869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.051117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.051153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.051432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.051468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.051704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.051738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.051854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.051888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.052044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.052079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.052305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.052341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.052550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.052585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.052717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.052752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.648 qpair failed and we were unable to recover it.
00:27:04.648 [2024-11-18 13:10:02.052970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.648 [2024-11-18 13:10:02.053006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.053291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.053324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.053558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.053593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.053780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.053815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.054032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.054066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.054293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.054327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.054553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.054587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.054873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.054913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.055131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.055165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.055478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.055514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.055794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.055828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.055981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.056015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.056168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.056203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.056396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.056430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.056659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.056692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.056962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.056997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.057120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.057154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.057274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.057308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.057535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.057572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.057755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.057791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.058010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.058046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.058232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.058265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.058449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.058484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.058715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.058749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.058942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.058976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.059108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.059143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.059339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.059387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.059581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.059614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.059874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.059909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.060084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.060118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.060444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.649 [2024-11-18 13:10:02.060479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.649 qpair failed and we were unable to recover it.
00:27:04.649 [2024-11-18 13:10:02.060706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.649 [2024-11-18 13:10:02.060740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.649 qpair failed and we were unable to recover it. 00:27:04.649 [2024-11-18 13:10:02.060964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.649 [2024-11-18 13:10:02.060997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.649 qpair failed and we were unable to recover it. 00:27:04.649 [2024-11-18 13:10:02.061250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.061283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.061565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.061612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.061819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.061853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 
00:27:04.650 [2024-11-18 13:10:02.062109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.062146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.062374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.062413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.062605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.062638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.062825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.062858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.063058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.063093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 
00:27:04.650 [2024-11-18 13:10:02.063374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.063409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.063616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.063650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.063852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.063886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.064096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.064129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.064328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.064379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 
00:27:04.650 [2024-11-18 13:10:02.064519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.064556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.064774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.064810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.064974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.065010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.065213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.065246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.065383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.065420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 
00:27:04.650 [2024-11-18 13:10:02.065538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.065571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.065709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.065744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.065949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.065987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.066197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.066230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.066536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.066570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 
00:27:04.650 [2024-11-18 13:10:02.066775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.066809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.066940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.066974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.067101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.067135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.067335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.067383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.067661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.067694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 
00:27:04.650 [2024-11-18 13:10:02.067832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.067866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.068165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.068199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.068395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.068430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.068688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.068724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.068978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.069012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 
00:27:04.650 [2024-11-18 13:10:02.069198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.069231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.069378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.069415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.069621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.069656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.069874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.069910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.650 [2024-11-18 13:10:02.070092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.070126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 
00:27:04.650 [2024-11-18 13:10:02.070416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.650 [2024-11-18 13:10:02.070450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.650 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.070705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.070742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.070951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.070986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.071181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.071216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.071435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.071477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 
00:27:04.651 [2024-11-18 13:10:02.071589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.071625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.071824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.071859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.072072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.072107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.072314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.072349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.072631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.072665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 
00:27:04.651 [2024-11-18 13:10:02.072853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.072887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.073168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.073202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.073484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.073520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.073706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.073742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.073859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.073893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 
00:27:04.651 [2024-11-18 13:10:02.074216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.074250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.074412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.074449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.074644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.074679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.074880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.074915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.075114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.075147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 
00:27:04.651 [2024-11-18 13:10:02.075330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.075387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.075511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.075544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.075702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.075735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.075969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.076003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.076125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.076159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 
00:27:04.651 [2024-11-18 13:10:02.076347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.076394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.076601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.076637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.076843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.076876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.076993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.077027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.077231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.077266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 
00:27:04.651 [2024-11-18 13:10:02.077479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.077515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.077740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.077779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.077966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.077999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.078306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.078342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.078565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.078599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 
00:27:04.651 [2024-11-18 13:10:02.078851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.078886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.079123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.079157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.079292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.079328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.079596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.651 [2024-11-18 13:10:02.079631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.651 qpair failed and we were unable to recover it. 00:27:04.651 [2024-11-18 13:10:02.079827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.079860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 
00:27:04.652 [2024-11-18 13:10:02.079997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.080031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.080304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.080339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.080490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.080524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.080656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.080693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.080996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.081030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 
00:27:04.652 [2024-11-18 13:10:02.081225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.081261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.081480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.081517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.081798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.081831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.082107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.082140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.082432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.082469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 
00:27:04.652 [2024-11-18 13:10:02.082742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.082776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.082998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.083033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.083169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.083205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.083479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.083518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.083667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.083701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 
00:27:04.652 [2024-11-18 13:10:02.083989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.084022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.084283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.084317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.084516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.084553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.084807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.084841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.085076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.085109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 
00:27:04.652 [2024-11-18 13:10:02.085246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.085280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.085510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.085546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.085694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.085729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.085867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.085902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.086181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.086214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 
00:27:04.652 [2024-11-18 13:10:02.086427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.086461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.086718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.086751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.086889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.086924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.087135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.087168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.087305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.087338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 
00:27:04.652 [2024-11-18 13:10:02.087626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.087661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.087846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.087880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.088064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.088104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.088386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.088423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.088714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.088748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 
00:27:04.652 [2024-11-18 13:10:02.088891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.088927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.089203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.089238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.652 qpair failed and we were unable to recover it. 00:27:04.652 [2024-11-18 13:10:02.089421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.652 [2024-11-18 13:10:02.089457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.089628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.089663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.089886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.089920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 
00:27:04.653 [2024-11-18 13:10:02.090174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.090209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.090488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.090522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.090717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.090753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.090887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.090922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.091200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.091236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 
00:27:04.653 [2024-11-18 13:10:02.091435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.091471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.091674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.091709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.091909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.091944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.092130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.092166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.092370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.092406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 
00:27:04.653 [2024-11-18 13:10:02.092636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.092669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.092855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.092891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.093121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.093155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.093384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.093422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.093607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.093641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 
00:27:04.653 [2024-11-18 13:10:02.093909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.093944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.094144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.094178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.094432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.094467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.094586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.094619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.094909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.094943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 
00:27:04.653 [2024-11-18 13:10:02.095219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.095253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.095383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.095418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.095623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.095657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.095801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.095836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.095965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.095999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 
00:27:04.653 [2024-11-18 13:10:02.096257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.096292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.096431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.096463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.096646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.096679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.096876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.096911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.097117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.097152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 
00:27:04.653 [2024-11-18 13:10:02.097405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.097439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.653 [2024-11-18 13:10:02.097564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.653 [2024-11-18 13:10:02.097601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.653 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.097798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.097831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.098054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74daf0 is same with the state(6) to be set 00:27:04.654 [2024-11-18 13:10:02.098459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.098540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.098779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.098818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 
00:27:04.654 [2024-11-18 13:10:02.099107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.099142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.099274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.099309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.099463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.099498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.099727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.099761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.099916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.099950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 
00:27:04.654 [2024-11-18 13:10:02.100082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.100118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.100257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.100290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.100432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.100465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.100672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.100706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.100900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.100935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 
00:27:04.654 [2024-11-18 13:10:02.101158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.101191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.101403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.101441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.101569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.101603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.101785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.101819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.102115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.102149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 
00:27:04.654 [2024-11-18 13:10:02.102405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.102441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.102571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.102606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.102887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.102922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.103142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.103176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.103320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.103362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 
00:27:04.654 [2024-11-18 13:10:02.103500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.103534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.103827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.103862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.104083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.104116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.104242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.104277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.104463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.104505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 
00:27:04.654 [2024-11-18 13:10:02.104700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.104735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.104999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.105034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.105310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.105346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.105541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.105575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 00:27:04.654 [2024-11-18 13:10:02.105772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.654 [2024-11-18 13:10:02.105806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.654 qpair failed and we were unable to recover it. 
00:27:04.657 [2024-11-18 13:10:02.131426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.657 [2024-11-18 13:10:02.131459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.657 qpair failed and we were unable to recover it. 00:27:04.657 [2024-11-18 13:10:02.131589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.657 [2024-11-18 13:10:02.131620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.657 qpair failed and we were unable to recover it. 00:27:04.657 [2024-11-18 13:10:02.131749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.657 [2024-11-18 13:10:02.131782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.657 qpair failed and we were unable to recover it. 00:27:04.657 [2024-11-18 13:10:02.132031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.657 [2024-11-18 13:10:02.132064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.657 qpair failed and we were unable to recover it. 00:27:04.657 [2024-11-18 13:10:02.132247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.657 [2024-11-18 13:10:02.132287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.657 qpair failed and we were unable to recover it. 
00:27:04.657 [2024-11-18 13:10:02.132502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.657 [2024-11-18 13:10:02.132534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.657 qpair failed and we were unable to recover it. 00:27:04.657 [2024-11-18 13:10:02.132743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.657 [2024-11-18 13:10:02.132777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.132973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.133007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.133221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.133254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.133462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.133498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 
00:27:04.658 [2024-11-18 13:10:02.133615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.133647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.133847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.133880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.134131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.134163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.134281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.134314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.134452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.134485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 
00:27:04.658 [2024-11-18 13:10:02.134666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.134700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.134882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.134915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.135086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.135118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.135315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.135347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.135565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.135598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 
00:27:04.658 [2024-11-18 13:10:02.135825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.135858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.136067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.136099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.136284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.136317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.136530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.136564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.136805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.136838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 
00:27:04.658 [2024-11-18 13:10:02.136982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.137016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.137129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.137162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.137338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.137404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.137541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.137575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.137694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.137726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 
00:27:04.658 [2024-11-18 13:10:02.137997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.138030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.138182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.138216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.138439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.138472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.138598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.138632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.138889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.138921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 
00:27:04.658 [2024-11-18 13:10:02.139051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.139083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.139213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.139246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.139385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.139421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.139684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.139716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 00:27:04.658 [2024-11-18 13:10:02.139849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.658 [2024-11-18 13:10:02.139882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.658 qpair failed and we were unable to recover it. 
00:27:04.658 [2024-11-18 13:10:02.140004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.140036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.140146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.140178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.140382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.140417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.140543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.140576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.140759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.140798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 
00:27:04.659 [2024-11-18 13:10:02.141004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.141039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.141179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.141210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.141319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.141359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.141576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.141608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.141786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.141819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 
00:27:04.659 [2024-11-18 13:10:02.142002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.142035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.142153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.142186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.142320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.142364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.142493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.142526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.142659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.142692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 
00:27:04.659 [2024-11-18 13:10:02.142862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.142894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.143145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.143178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.143386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.143422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.143629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.143661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.143905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.143938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 
00:27:04.659 [2024-11-18 13:10:02.144150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.144184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.144387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.144421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.144606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.144643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.144840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.144874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.145000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.145032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 
00:27:04.659 [2024-11-18 13:10:02.145159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.145192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.145397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.145432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.145618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.145650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.145844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.145879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.146132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.146165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 
00:27:04.659 [2024-11-18 13:10:02.146289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.146321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.146485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.146520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.146698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.146731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.146908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.146942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 00:27:04.659 [2024-11-18 13:10:02.147205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.659 [2024-11-18 13:10:02.147239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.659 qpair failed and we were unable to recover it. 
00:27:04.659 [2024-11-18 13:10:02.147487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.659 [2024-11-18 13:10:02.147522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.659 qpair failed and we were unable to recover it.
00:27:04.661 [2024-11-18 13:10:02.163207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.661 [2024-11-18 13:10:02.163282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.661 qpair failed and we were unable to recover it.
00:27:04.663 [2024-11-18 13:10:02.171978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.172012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.172160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.172194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.172492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.172525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.172725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.172757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.172930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.172963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 
00:27:04.663 [2024-11-18 13:10:02.173097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.173129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.173236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.173267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.173462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.173497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.173621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.173653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.173781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.173813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 
00:27:04.663 [2024-11-18 13:10:02.173935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.173968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.174092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.174125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.174265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.174298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.174488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.174522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.174710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.174742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 
00:27:04.663 [2024-11-18 13:10:02.174997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.175030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.175220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.175253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.175504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.175536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.175644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.175675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.175879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.175909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 
00:27:04.663 [2024-11-18 13:10:02.176036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.176069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.176339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.176382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.176571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.176604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.176781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.176814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.176997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.177029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 
00:27:04.663 [2024-11-18 13:10:02.177321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.177367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.177488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.177520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.177646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.177679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.177807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.177841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.177989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.178022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 
00:27:04.663 [2024-11-18 13:10:02.178146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.178178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.178939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.178991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.179318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.179367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.179484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.179518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 00:27:04.663 [2024-11-18 13:10:02.179768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.179800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.663 qpair failed and we were unable to recover it. 
00:27:04.663 [2024-11-18 13:10:02.179989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.663 [2024-11-18 13:10:02.180022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.180162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.180195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.180388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.180421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.180569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.180600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.180779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.180812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 
00:27:04.664 [2024-11-18 13:10:02.181715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.181760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.185383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.185453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.185602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.185637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.185794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.185826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.186005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.186039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 
00:27:04.664 [2024-11-18 13:10:02.186295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.186330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.186547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.186582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.186728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.186761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.187003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.187041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.187228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.187265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 
00:27:04.664 [2024-11-18 13:10:02.187468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.187505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.187640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.187673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.187874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.187908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.188035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.188069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.188218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.188250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 
00:27:04.664 [2024-11-18 13:10:02.188386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.188421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.188606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.188642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.188768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.188803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.188936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.188970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.189150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.189187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 
00:27:04.664 [2024-11-18 13:10:02.189381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.189417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.189536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.189569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.189686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.189716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.189850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.189883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.190034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.190067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 
00:27:04.664 [2024-11-18 13:10:02.190261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.190293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.190554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.190591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.190844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.190879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.191014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.191046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.191177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.191209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 
00:27:04.664 [2024-11-18 13:10:02.191350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.191401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.191581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.191612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.191738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.664 [2024-11-18 13:10:02.191770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.664 qpair failed and we were unable to recover it. 00:27:04.664 [2024-11-18 13:10:02.192010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.665 [2024-11-18 13:10:02.192042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.665 qpair failed and we were unable to recover it. 00:27:04.665 [2024-11-18 13:10:02.192213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.665 [2024-11-18 13:10:02.192245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.665 qpair failed and we were unable to recover it. 
00:27:04.665 [2024-11-18 13:10:02.192362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.665 [2024-11-18 13:10:02.192397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.665 qpair failed and we were unable to recover it.
00:27:04.665 [2024-11-18 13:10:02.195202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.665 [2024-11-18 13:10:02.195275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:04.665 qpair failed and we were unable to recover it.
00:27:04.668 [2024-11-18 13:10:02.212665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.212697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.212885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.212916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.213036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.213068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.213195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.213227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.213345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.213406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 
00:27:04.668 [2024-11-18 13:10:02.213574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.213607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.213781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.213812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.213929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.213961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.214070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.214100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.214372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.214406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 
00:27:04.668 [2024-11-18 13:10:02.214545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.214578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.214750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.214788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.214900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.214932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.215121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.215153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.215287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.215319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 
00:27:04.668 [2024-11-18 13:10:02.215452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.215486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.215771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.215803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.215905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.215936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.216197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.216228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.216422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.216454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 
00:27:04.668 [2024-11-18 13:10:02.216580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.216614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.216885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.216917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.217045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.217077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.217211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.217244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.217498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.217531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 
00:27:04.668 [2024-11-18 13:10:02.217794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.217826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.218019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.218050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.218236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.218269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.668 qpair failed and we were unable to recover it. 00:27:04.668 [2024-11-18 13:10:02.218396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.668 [2024-11-18 13:10:02.218428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.218547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.218578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 
00:27:04.669 [2024-11-18 13:10:02.218711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.218744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.218875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.218906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.219095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.219128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.219229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.219260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.219396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.219428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 
00:27:04.669 [2024-11-18 13:10:02.219599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.219632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.219830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.219863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.220055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.220085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.220318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.220349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.220543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.220574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 
00:27:04.669 [2024-11-18 13:10:02.220702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.220734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.220974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.221006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.221139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.221172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.221350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.221411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.221614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.221646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 
00:27:04.669 [2024-11-18 13:10:02.221830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.221860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.222032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.222063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.222266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.222298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.222495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.222529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.222705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.222737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 
00:27:04.669 [2024-11-18 13:10:02.222853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.222885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.223097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.223133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.223251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.223285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.223396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.223428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.223642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.223673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 
00:27:04.669 [2024-11-18 13:10:02.223807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.223838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.224012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.224045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.224219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.224250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.224439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.224471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.224718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.224750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 
00:27:04.669 [2024-11-18 13:10:02.224883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.224913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.225154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.225186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.225372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.225404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.225535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.225568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.225692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.225723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 
00:27:04.669 [2024-11-18 13:10:02.225998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.226031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.669 qpair failed and we were unable to recover it. 00:27:04.669 [2024-11-18 13:10:02.226137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.669 [2024-11-18 13:10:02.226168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.670 qpair failed and we were unable to recover it. 00:27:04.670 [2024-11-18 13:10:02.226344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.670 [2024-11-18 13:10:02.226383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.670 qpair failed and we were unable to recover it. 00:27:04.670 [2024-11-18 13:10:02.226558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.670 [2024-11-18 13:10:02.226591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.670 qpair failed and we were unable to recover it. 00:27:04.670 [2024-11-18 13:10:02.226707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.670 [2024-11-18 13:10:02.226739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.670 qpair failed and we were unable to recover it. 
00:27:04.670 [2024-11-18 13:10:02.226917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.670 [2024-11-18 13:10:02.226948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.670 qpair failed and we were unable to recover it. 00:27:04.670 [2024-11-18 13:10:02.227082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.670 [2024-11-18 13:10:02.227115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.670 qpair failed and we were unable to recover it. 00:27:04.670 [2024-11-18 13:10:02.227286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.670 [2024-11-18 13:10:02.227319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.670 qpair failed and we were unable to recover it. 00:27:04.670 [2024-11-18 13:10:02.227476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.670 [2024-11-18 13:10:02.227549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:04.670 qpair failed and we were unable to recover it. 00:27:04.670 [2024-11-18 13:10:02.227775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.670 [2024-11-18 13:10:02.227812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:04.670 qpair failed and we were unable to recover it. 
00:27:04.670 [2024-11-18 13:10:02.227935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.670 [2024-11-18 13:10:02.227968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:04.670 qpair failed and we were unable to recover it.
00:27:04.670 [... the connect()/connection-error/qpair-failed triplet above repeats unchanged for tqpair=0x7fad1c000b90, timestamps 2024-11-18 13:10:02.228105 through 13:10:02.236508 ...]
00:27:04.671 [2024-11-18 13:10:02.236642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.671 [2024-11-18 13:10:02.236678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:04.671 qpair failed and we were unable to recover it.
00:27:04.673 [... the same triplet repeats unchanged for tqpair=0x7fad18000b90, timestamps 2024-11-18 13:10:02.236791 through 13:10:02.251024 ...]
00:27:04.673 [2024-11-18 13:10:02.251134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.251166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.251334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.251373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.251550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.251582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.251772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.251804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.251977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.252009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 
00:27:04.673 [2024-11-18 13:10:02.252201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.252235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.252430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.252462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.252636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.252668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.252887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.252920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.253024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.253055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 
00:27:04.673 [2024-11-18 13:10:02.253177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.253210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.253317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.253349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.253478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.253510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.253640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.253672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.253856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.253888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 
00:27:04.673 [2024-11-18 13:10:02.254025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.254057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.254228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.254260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.254455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.254489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.254662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.254693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.254818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.254851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 
00:27:04.673 [2024-11-18 13:10:02.255026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.255059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.255256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.255293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.255447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.673 [2024-11-18 13:10:02.255481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.673 qpair failed and we were unable to recover it. 00:27:04.673 [2024-11-18 13:10:02.255694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.255725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.255832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.255864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 
00:27:04.674 [2024-11-18 13:10:02.255990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.256022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.256155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.256186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.256392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.256425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.256551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.256584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.256722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.256754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 
00:27:04.674 [2024-11-18 13:10:02.256955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.256987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.257171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.257203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.257335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.257377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.257612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.257644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.257846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.257879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 
00:27:04.674 [2024-11-18 13:10:02.258004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.258036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.258211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.258243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.258442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.258475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.258586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.258618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.258833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.258865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 
00:27:04.674 [2024-11-18 13:10:02.259043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.259075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.259200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.259232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.259431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.259464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.259654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.259686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.259824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.259856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 
00:27:04.674 [2024-11-18 13:10:02.259982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.260015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.260146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.260177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.260406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.260438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.260608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.260681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.260986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.261057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 
00:27:04.674 [2024-11-18 13:10:02.261196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.261233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.261456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.261493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.261704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.261737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.261870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.261903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.262081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.262114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 
00:27:04.674 [2024-11-18 13:10:02.262287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.262319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.262519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.262555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.262726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.262758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.262859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.262892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.674 [2024-11-18 13:10:02.263066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.263098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 
00:27:04.674 [2024-11-18 13:10:02.263286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.674 [2024-11-18 13:10:02.263318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.674 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.263431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.263473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.263650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.263682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.263819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.263853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.264089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.264121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 
00:27:04.675 [2024-11-18 13:10:02.264256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.264288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.264466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.264499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.264673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.264707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.264923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.264955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.265071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.265103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 
00:27:04.675 [2024-11-18 13:10:02.265224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.265256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.265444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.265477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.265703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.265735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.265926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.265959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.266181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.266214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 
00:27:04.675 [2024-11-18 13:10:02.266341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.266387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.266571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.266603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.266708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.266741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.266984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.267015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 00:27:04.675 [2024-11-18 13:10:02.267126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.675 [2024-11-18 13:10:02.267159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.675 qpair failed and we were unable to recover it. 
00:27:04.678 [2024-11-18 13:10:02.290600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.290632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.290815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.290848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.291061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.291093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.291278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.291310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.291529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.291562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 
00:27:04.678 [2024-11-18 13:10:02.291684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.291716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.291892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.291923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.292098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.292129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.292402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.292438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.292622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.292654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 
00:27:04.678 [2024-11-18 13:10:02.292833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.292865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.293053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.293085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.293210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.293242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.293371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.293405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.293524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.293557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 
00:27:04.678 [2024-11-18 13:10:02.293733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.678 [2024-11-18 13:10:02.293764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.678 qpair failed and we were unable to recover it. 00:27:04.678 [2024-11-18 13:10:02.293977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.294011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.294118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.294150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.294333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.294393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.294599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.294633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 
00:27:04.679 [2024-11-18 13:10:02.294832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.294864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.294985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.295017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.295221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.295253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.295395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.295429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.295613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.295645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 
00:27:04.679 [2024-11-18 13:10:02.295761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.295793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.296033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.296065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.296279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.296312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.296427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.296460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.296632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.296665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 
00:27:04.679 [2024-11-18 13:10:02.296847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.296880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.296997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.297028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.297237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.297270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.297399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.297432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.297558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.297592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 
00:27:04.679 [2024-11-18 13:10:02.297782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.297813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.297933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.297965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.298146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.298178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.298446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.298480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.298674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.298708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 
00:27:04.679 [2024-11-18 13:10:02.298825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.298857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.299039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.299072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.299248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.299281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.299420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.299454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.299561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.299593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 
00:27:04.679 [2024-11-18 13:10:02.299726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.299759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.299946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.299979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.300103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.300134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.300326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.300369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.300541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.300572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 
00:27:04.679 [2024-11-18 13:10:02.300839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.300872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.301006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.301038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.301164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.301196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.301404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.679 [2024-11-18 13:10:02.301437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.679 qpair failed and we were unable to recover it. 00:27:04.679 [2024-11-18 13:10:02.301613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.301647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 
00:27:04.680 [2024-11-18 13:10:02.301883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.301916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.302180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.302213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.302329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.302373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.302491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.302528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.302712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.302744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 
00:27:04.680 [2024-11-18 13:10:02.302866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.302898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.303074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.303106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.303306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.303338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.303470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.303501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.303678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.303712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 
00:27:04.680 [2024-11-18 13:10:02.303826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.303858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.304032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.304064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.304237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.304269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.304444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.304478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.304676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.304709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 
00:27:04.680 [2024-11-18 13:10:02.304826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.304857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.305078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.305110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.305285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.305317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.305518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.305554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.305795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.305826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 
00:27:04.680 [2024-11-18 13:10:02.305949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.305982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.306174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.306208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.306314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.306345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.306600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.306633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 00:27:04.680 [2024-11-18 13:10:02.306822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.680 [2024-11-18 13:10:02.306853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.680 qpair failed and we were unable to recover it. 
00:27:04.682 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x7fad24000b90 (addr=10.0.0.2, port=4420) repeats, with new timestamps only, roughly 110 more times through 13:10:02.328 ...]
00:27:04.965 [2024-11-18 13:10:02.328890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.328921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.329124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.329157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.329441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.329474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.329588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.329620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.329796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.329828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 
00:27:04.965 [2024-11-18 13:10:02.329956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.329987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.330109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.330141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.330446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.330481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.330604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.330636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.330878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.330910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 
00:27:04.965 [2024-11-18 13:10:02.331033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.331065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.331198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.331229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.331361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.331395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.331517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.331548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.331733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.331764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 
00:27:04.965 [2024-11-18 13:10:02.331886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.331920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.332112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.332145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.332322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.332363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.332555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.332588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.332758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.332788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 
00:27:04.965 [2024-11-18 13:10:02.332910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.332941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.333059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.333092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.333286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.333317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.333497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.333530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.333705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.333736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 
00:27:04.965 [2024-11-18 13:10:02.333985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.334019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.965 [2024-11-18 13:10:02.334195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.965 [2024-11-18 13:10:02.334226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.965 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.334416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.334450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.334585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.334621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.334863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.334897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-11-18 13:10:02.335012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.335043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.335244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.335277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.335466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.335499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.335710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.335741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.335924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.335956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-11-18 13:10:02.336146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.336178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.336301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.336333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.336550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.336583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.336843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.336875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.337152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.337184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-11-18 13:10:02.337366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.337399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.337575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.337608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.337741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.337774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.337887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.337918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.338088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.338121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-11-18 13:10:02.338248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.338279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.338448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.338480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.338735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.338767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.338893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.338924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.339092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.339124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-11-18 13:10:02.339326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.339365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.339486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.339517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.339728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.339760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.339890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.339921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.340132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.340164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.966 [2024-11-18 13:10:02.340382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.340416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.340606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.340638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.340873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.340904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.341110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.341142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 00:27:04.966 [2024-11-18 13:10:02.341273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.966 [2024-11-18 13:10:02.341303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.966 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-11-18 13:10:02.341421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.341454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.341570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.341602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.341735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.341767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.341941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.341973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.342147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.342177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-11-18 13:10:02.342437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.342470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.342661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.342691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.342819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.342850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.342980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.343019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.343198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.343229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-11-18 13:10:02.343491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.343523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.343785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.343817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.344006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.344038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.344300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.344332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 00:27:04.967 [2024-11-18 13:10:02.344543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.967 [2024-11-18 13:10:02.344575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.967 qpair failed and we were unable to recover it. 
00:27:04.967 [2024-11-18 13:10:02.344692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.967 [2024-11-18 13:10:02.344722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.967 qpair failed and we were unable to recover it.
00:27:04.967 [... the same three-line error repeats 100 times for tqpair=0x7fad24000b90, from 13:10:02.344984 through 13:10:02.366736 ...]
00:27:04.970 [2024-11-18 13:10:02.366965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.970 [2024-11-18 13:10:02.367037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.970 qpair failed and we were unable to recover it.
00:27:04.971 [... the same three-line error repeats 15 times for tqpair=0x73fba0, from 13:10:02.367252 through 13:10:02.370334 ...]
00:27:04.971 [2024-11-18 13:10:02.370468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.370501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.370623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.370655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.370922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.370954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.371078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.371111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.371232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.371263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 
00:27:04.971 [2024-11-18 13:10:02.371522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.371555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.371681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.371713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.371899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.371931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.372102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.372134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.372309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.372342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 
00:27:04.971 [2024-11-18 13:10:02.372490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.372522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.372762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.372794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.372911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.372953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.373061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.373091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.373214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.373246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 
00:27:04.971 [2024-11-18 13:10:02.373368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.373402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.373638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.373671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.373857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.373889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.374072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.374103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.374295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.374327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 
00:27:04.971 [2024-11-18 13:10:02.374530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.374563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.374733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.374764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.374889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.374921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.375042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.375073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.375331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.375374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 
00:27:04.971 [2024-11-18 13:10:02.375544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.375576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.375760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.971 [2024-11-18 13:10:02.375793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.971 qpair failed and we were unable to recover it. 00:27:04.971 [2024-11-18 13:10:02.376053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.376084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.376293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.376325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.376453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.376486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-11-18 13:10:02.376673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.376705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.376879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.376910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.377078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.377111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.377301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.377332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.377476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.377509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-11-18 13:10:02.377745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.377777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.377947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.377978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.378100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.378132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.378238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.378271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.378391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.378424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-11-18 13:10:02.378627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.378661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.378846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.378879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.378998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.379031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.379149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.379181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.379444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.379476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-11-18 13:10:02.379659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.379692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.379937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.379968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.380230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.380262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.380443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.380477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.380688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.380720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-11-18 13:10:02.380959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.380992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.381183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.381215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.381326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.381369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.381546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.381579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.381704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.381736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.972 [2024-11-18 13:10:02.381867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.381899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.382091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.382123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.382306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.382338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.382609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.382642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 00:27:04.972 [2024-11-18 13:10:02.382770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.972 [2024-11-18 13:10:02.382802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.972 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-11-18 13:10:02.382919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.382951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.383142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.383174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.383372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.383406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.383589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.383622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.383743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.383775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-11-18 13:10:02.383952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.383984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.384190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.384221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.384334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.384377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.384617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.384649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.384759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.384791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-11-18 13:10:02.385000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.385032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.385218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.385250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.385425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.385459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.385584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.385615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.385734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.385767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-11-18 13:10:02.385948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.385980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.386094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.386126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.386299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.386331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.386474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.386508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.386688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.386720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-11-18 13:10:02.386826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.386864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.386994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.387026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.387262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.387294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.387478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.387512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.387773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.387806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-11-18 13:10:02.387977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.388007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.388126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.388157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.388334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.388398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.388533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.388565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.388760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.388792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 
00:27:04.973 [2024-11-18 13:10:02.388975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.389007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.389185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.973 [2024-11-18 13:10:02.389215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.973 qpair failed and we were unable to recover it. 00:27:04.973 [2024-11-18 13:10:02.389420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.389455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.389576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.389607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.389825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.389858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 
00:27:04.974 [2024-11-18 13:10:02.390068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.390101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.390309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.390341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.390591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.390624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.390870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.390903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.391082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.391114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 
00:27:04.974 [2024-11-18 13:10:02.391292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.391323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.391616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.391649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.391860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.391892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.392100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.392131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.392266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.392297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 
00:27:04.974 [2024-11-18 13:10:02.392441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.392474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.392577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.392609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.392783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.392815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.393005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.393038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.393227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.393259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 
00:27:04.974 [2024-11-18 13:10:02.393429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.393464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.393677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.393709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.393904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.393934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.394170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.394201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.394379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.394412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 
00:27:04.974 [2024-11-18 13:10:02.394540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.394573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.394743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.394773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.394956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.974 [2024-11-18 13:10:02.394987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.974 qpair failed and we were unable to recover it. 00:27:04.974 [2024-11-18 13:10:02.395169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.395200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.395324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.395374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 
00:27:04.975 [2024-11-18 13:10:02.395490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.395522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.395736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.395773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.395955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.395987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.396111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.396143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.396275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.396305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 
00:27:04.975 [2024-11-18 13:10:02.396450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.396483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.396719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.396751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.396857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.396890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.397003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.397034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.397231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.397264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 
00:27:04.975 [2024-11-18 13:10:02.397440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.397476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.397721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.397754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.398017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.398048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.398219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.398250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.398436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.398470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 
00:27:04.975 [2024-11-18 13:10:02.398652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.398685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.398872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.398904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.399120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.399151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.399347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.399385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.399650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.399683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 
00:27:04.975 [2024-11-18 13:10:02.399802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.399835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.399952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.399983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.400104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.400136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.400343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.400405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.400519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.400551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 
00:27:04.975 [2024-11-18 13:10:02.400733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.400765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.400904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.400935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.401111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.401143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.401324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.401374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.401565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.401599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 
00:27:04.975 [2024-11-18 13:10:02.401703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.401735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.401922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.975 [2024-11-18 13:10:02.401953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.975 qpair failed and we were unable to recover it. 00:27:04.975 [2024-11-18 13:10:02.402215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.402247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.402381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.402414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.402616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.402649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 
00:27:04.976 [2024-11-18 13:10:02.402837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.402870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.403107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.403139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.403328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.403368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.403545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.403577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.403692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.403723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 
00:27:04.976 [2024-11-18 13:10:02.403907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.403940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.404175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.404207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.404421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.404456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.404649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.404681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.404791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.404821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 
00:27:04.976 [2024-11-18 13:10:02.405000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.405033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.405321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.405364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.405501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.405533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.405774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.405806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 00:27:04.976 [2024-11-18 13:10:02.406046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.406077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 
00:27:04.976 [2024-11-18 13:10:02.406245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.976 [2024-11-18 13:10:02.406278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.976 qpair failed and we were unable to recover it. 
00:27:04.980 [2024-11-18 13:10:02.429841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.429874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.430145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.430178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.430373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.430408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.430649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.430681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.430797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.430829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 
00:27:04.980 [2024-11-18 13:10:02.430957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.430989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.431163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.431194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.431376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.431420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.431558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.431590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.431722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.431754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 
00:27:04.980 [2024-11-18 13:10:02.431943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.431975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.432232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.432264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.432397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.432431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.432544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.432578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.432769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.432800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 
00:27:04.980 [2024-11-18 13:10:02.433042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.433075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.433250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.433282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.433533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.433568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.433693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.433724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.433987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.434019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 
00:27:04.980 [2024-11-18 13:10:02.434123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.434155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.434370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.434404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.980 [2024-11-18 13:10:02.434541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.980 [2024-11-18 13:10:02.434573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.980 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.434770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.434801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.434973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.435004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 
00:27:04.981 [2024-11-18 13:10:02.435268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.435301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.435427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.435460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.435752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.435785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.435987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.436019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.436232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.436264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 
00:27:04.981 [2024-11-18 13:10:02.436379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.436414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.436590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.436622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.436861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.436894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.437022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.437054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.437263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.437295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 
00:27:04.981 [2024-11-18 13:10:02.437427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.437460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.437699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.437731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.437971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.438003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.438140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.438172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.438364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.438399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 
00:27:04.981 [2024-11-18 13:10:02.438600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.438638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.438828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.438861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.438998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.439032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.439208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.439240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.439412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.439444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 
00:27:04.981 [2024-11-18 13:10:02.439624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.439656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.439771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.439803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.439995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.440027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.440146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.440179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.440421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.440455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 
00:27:04.981 [2024-11-18 13:10:02.440715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.440748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.440883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.440915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.441100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.981 [2024-11-18 13:10:02.441132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.981 qpair failed and we were unable to recover it. 00:27:04.981 [2024-11-18 13:10:02.441305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.441337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.441594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.441628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 
00:27:04.982 [2024-11-18 13:10:02.441760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.441792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.441979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.442011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.442127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.442160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.442336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.442376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.442561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.442594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 
00:27:04.982 [2024-11-18 13:10:02.442855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.442887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.443127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.443160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.443403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.443437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.443572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.443604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.443796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.443828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 
00:27:04.982 [2024-11-18 13:10:02.444007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.444040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.444153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.444185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.444289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.444322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.444512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.444547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.444730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.444761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 
00:27:04.982 [2024-11-18 13:10:02.445032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.445065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.445314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.445346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.445620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.445652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.445833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.445865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 00:27:04.982 [2024-11-18 13:10:02.446048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.982 [2024-11-18 13:10:02.446080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.982 qpair failed and we were unable to recover it. 
00:27:04.982 [2024-11-18 13:10:02.446278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.982 [2024-11-18 13:10:02.446311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.982 qpair failed and we were unable to recover it.
[the same three-line error triplet repeats continuously from 13:10:02.446506 through 13:10:02.470562; every reconnect attempt against tqpair=0x73fba0 (addr=10.0.0.2, port=4420) fails with errno = 111 and the qpair is not recovered]
00:27:04.986 [2024-11-18 13:10:02.470806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.470839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.471015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.471047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.471316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.471349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.471544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.471578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.471771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.471803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 
00:27:04.986 [2024-11-18 13:10:02.471914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.471946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.472191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.472224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.472401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.472433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.472562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.472594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.472783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.472815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 
00:27:04.986 [2024-11-18 13:10:02.473085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.473118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.473223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.473256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.473448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.473481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.473599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.473631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.473815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.473849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 
00:27:04.986 [2024-11-18 13:10:02.474042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.474074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.474201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.474233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.474368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.474400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.474570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.474603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.474865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.474898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 
00:27:04.986 [2024-11-18 13:10:02.475023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.475057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.986 [2024-11-18 13:10:02.475240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-11-18 13:10:02.475273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.986 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.475467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.475500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.475790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.475823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.475998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.476030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 
00:27:04.987 [2024-11-18 13:10:02.476217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.476249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.476393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.476428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.476638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.476677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.476807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.476840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.477011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.477042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 
00:27:04.987 [2024-11-18 13:10:02.477167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.477200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.477407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.477442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.477611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.477643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.477774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.477806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.478075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.478108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 
00:27:04.987 [2024-11-18 13:10:02.478298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.478331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.478447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.478481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.478665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.478698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.478907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.478939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.479210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.479243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 
00:27:04.987 [2024-11-18 13:10:02.479430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.479462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.479653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.479685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.479865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.479896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.480076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.480109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.480367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.480402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 
00:27:04.987 [2024-11-18 13:10:02.480600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.480633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.480820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.480853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.480986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.481018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.481256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.481288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.481402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.481435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 
00:27:04.987 [2024-11-18 13:10:02.481637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.481670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.481806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.987 [2024-11-18 13:10:02.481839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.987 qpair failed and we were unable to recover it. 00:27:04.987 [2024-11-18 13:10:02.482027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.482060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.482251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.482283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.482525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.482564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 
00:27:04.988 [2024-11-18 13:10:02.482749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.482781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.482979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.483010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.483197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.483230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.483408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.483441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.483572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.483604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 
00:27:04.988 [2024-11-18 13:10:02.483785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.483816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.483988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.484019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.484260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.484292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.484436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.484469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.484595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.484627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 
00:27:04.988 [2024-11-18 13:10:02.484836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.484867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.485065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.485097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.485214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.485246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.485517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.485553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.485817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.485848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 
00:27:04.988 [2024-11-18 13:10:02.486027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.486060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.486244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.486277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.486453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.486510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.486701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.486735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 00:27:04.988 [2024-11-18 13:10:02.486974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.988 [2024-11-18 13:10:02.487007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:04.988 qpair failed and we were unable to recover it. 
00:27:04.988 [2024-11-18 13:10:02.487196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.988 [2024-11-18 13:10:02.487227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:04.988 qpair failed and we were unable to recover it.
00:27:04.990 [2024-11-18 13:10:02.500133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.990 [2024-11-18 13:10:02.500206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:04.990 qpair failed and we were unable to recover it.
00:27:04.992 [2024-11-18 13:10:02.509056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.992 [2024-11-18 13:10:02.509129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:04.992 qpair failed and we were unable to recover it.
00:27:04.992 [2024-11-18 13:10:02.511941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.511973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.512176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.512208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.512338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.512381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.512576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.512607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.512851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.512882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 
00:27:04.992 [2024-11-18 13:10:02.513057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.513088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.513275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.513306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.513448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.513481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.513662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.513694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.513906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.513938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 
00:27:04.992 [2024-11-18 13:10:02.514111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.514142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.514310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.514341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.992 [2024-11-18 13:10:02.514547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.992 [2024-11-18 13:10:02.514581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.992 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.514721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.514754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.514922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.514954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 
00:27:04.993 [2024-11-18 13:10:02.515219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.515251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.515433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.515468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.515653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.515685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.515867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.515898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.516109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.516141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 
00:27:04.993 [2024-11-18 13:10:02.516313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.516345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.516551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.516584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.516753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.516785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.516903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.516934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.517128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.517162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 
00:27:04.993 [2024-11-18 13:10:02.517422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.517455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.517651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.517683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.517803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.517835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.518030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.518062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.518188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.518219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 
00:27:04.993 [2024-11-18 13:10:02.518334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.518377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.518591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.518623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.518889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.518922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.519035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.519066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.519328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.519370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 
00:27:04.993 [2024-11-18 13:10:02.519556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.519587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.519755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.519786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.519974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.520006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.520245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.520277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.520468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.520508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 
00:27:04.993 [2024-11-18 13:10:02.520727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.520759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.520975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.521008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.521194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.521228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.521432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.521465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 00:27:04.993 [2024-11-18 13:10:02.521651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.993 [2024-11-18 13:10:02.521683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.993 qpair failed and we were unable to recover it. 
00:27:04.993 [2024-11-18 13:10:02.521788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.521820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.521991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.522024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.522153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.522184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.522302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.522334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.522585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.522618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 
00:27:04.994 [2024-11-18 13:10:02.522862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.522895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.523130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.523162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.523386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.523419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.523619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.523652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.523858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.523891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 
00:27:04.994 [2024-11-18 13:10:02.524074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.524106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.524297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.524329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.524532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.524564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.524698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.524729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.524918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.524949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 
00:27:04.994 [2024-11-18 13:10:02.525119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.525152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.525394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.525428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.525575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.525608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.525798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.525830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.526074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.526106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 
00:27:04.994 [2024-11-18 13:10:02.526227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.526258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.526411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.526445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.526635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.526668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.526802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.526834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.527014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.527046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 
00:27:04.994 [2024-11-18 13:10:02.527229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.527260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.527468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.527502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.527622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.527654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.527918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.527950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 00:27:04.994 [2024-11-18 13:10:02.528135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.528167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it. 
00:27:04.994 [2024-11-18 13:10:02.528287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.994 [2024-11-18 13:10:02.528319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.994 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / "sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats ~115 more times between 13:10:02.528463 and 13:10:02.551552; repeats elided ...]
00:27:04.998 [2024-11-18 13:10:02.551744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.551775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.551905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.551937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.552048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.552079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.552340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.552385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.552502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.552534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 
00:27:04.998 [2024-11-18 13:10:02.552720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.552752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.552875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.552908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.553018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.553050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.553178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.553208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.553338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.553400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 
00:27:04.998 [2024-11-18 13:10:02.553618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.553651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.553777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-11-18 13:10:02.553809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.998 qpair failed and we were unable to recover it. 00:27:04.998 [2024-11-18 13:10:02.554001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.554033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.554220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.554251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.554427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.554459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 
00:27:04.999 [2024-11-18 13:10:02.554583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.554617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.554793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.554825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.555067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.555099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.555307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.555341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.555471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.555504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 
00:27:04.999 [2024-11-18 13:10:02.555745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.555776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.555963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.555995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.556186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.556218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.556414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.556447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.556561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.556593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 
00:27:04.999 [2024-11-18 13:10:02.556770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.556801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.557052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.557085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.557270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.557301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.557435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.557469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.557581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.557612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 
00:27:04.999 [2024-11-18 13:10:02.557780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.557811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.557990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.558022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.558142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.558174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.558285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.558315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.558454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.558488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 
00:27:04.999 [2024-11-18 13:10:02.558605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.558635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.558811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.558850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.559053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.559085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.559210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.559243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.559410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.559443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 
00:27:04.999 [2024-11-18 13:10:02.559582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.559615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.559743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.559775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.559957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.559988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.560091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.560123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.560257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.560289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 
00:27:04.999 [2024-11-18 13:10:02.560429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-11-18 13:10:02.560461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:04.999 qpair failed and we were unable to recover it. 00:27:04.999 [2024-11-18 13:10:02.560597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.560629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.560802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.560836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.561008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.561040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.561152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.561183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 
00:27:05.000 [2024-11-18 13:10:02.561399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.561433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.561566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.561598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.561730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.561761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.561943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.561975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.562150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.562182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 
00:27:05.000 [2024-11-18 13:10:02.562399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.562431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.562628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.562661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.562837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.562869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.562976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.563008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.563270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.563302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 
00:27:05.000 [2024-11-18 13:10:02.563574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.563608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.563784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.563816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.563989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.564021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.564140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.564171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.564295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.564328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 
00:27:05.000 [2024-11-18 13:10:02.564462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.564493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.564707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.564739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.564871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.564901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.565029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.565062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.565318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.565363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 
00:27:05.000 [2024-11-18 13:10:02.565589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.565621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.565738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.565769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.565941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.565972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.566170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.566202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 00:27:05.000 [2024-11-18 13:10:02.566325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-11-18 13:10:02.566370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.000 qpair failed and we were unable to recover it. 
00:27:05.000 [2024-11-18 13:10:02.566566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.000 [2024-11-18 13:10:02.566599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.000 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" entries for tqpair=0x7fad24000b90 (addr=10.0.0.2, port=4420) repeat continuously through 2024-11-18 13:10:02.590243 ...]
00:27:05.004 [2024-11-18 13:10:02.590514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.590547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-11-18 13:10:02.590672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.590703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-11-18 13:10:02.590879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.590911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-11-18 13:10:02.591173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.591206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-11-18 13:10:02.591412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.591445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 
00:27:05.004 [2024-11-18 13:10:02.591748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.591781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-11-18 13:10:02.591908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.591946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-11-18 13:10:02.592072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.592104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-11-18 13:10:02.592237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.592268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 00:27:05.004 [2024-11-18 13:10:02.592398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.004 [2024-11-18 13:10:02.592432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.004 qpair failed and we were unable to recover it. 
00:27:05.004 [2024-11-18 13:10:02.592555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.592585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.592762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.592794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.593060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.593092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.593307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.593340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.593470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.593503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 
00:27:05.005 [2024-11-18 13:10:02.593676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.593709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.593824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.593857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.594043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.594075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.594185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.594217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.594392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.594424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 
00:27:05.005 [2024-11-18 13:10:02.594623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.594654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.594782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.594814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.594984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.595016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.595281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.595313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.595441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.595473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 
00:27:05.005 [2024-11-18 13:10:02.595676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.595708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.595893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.595923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.596037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.596069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.596305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.596337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.596460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.596491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 
00:27:05.005 [2024-11-18 13:10:02.596738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.596770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.597008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.597040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.597244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.597275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.597448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.597482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.597701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.597734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 
00:27:05.005 [2024-11-18 13:10:02.597903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.597936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.598042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.598073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.598362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.598395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.598537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.598569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.598699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.598730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 
00:27:05.005 [2024-11-18 13:10:02.598844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.598876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.599144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.599176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.599305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.599337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.005 qpair failed and we were unable to recover it. 00:27:05.005 [2024-11-18 13:10:02.599463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.005 [2024-11-18 13:10:02.599494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.599621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.599652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.006 [2024-11-18 13:10:02.599774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.599805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.599934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.599971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.600084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.600116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.600303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.600334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.600543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.600574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.006 [2024-11-18 13:10:02.600766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.600796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.601042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.601074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.601204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.601235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.601424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.601457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.601649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.601680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.006 [2024-11-18 13:10:02.601794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.601825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.602063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.602095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.602214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.602246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.602425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.602459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.602648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.602679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.006 [2024-11-18 13:10:02.602857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.602889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.603010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.603041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.603304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.603336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.603567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.603599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.603723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.603754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.006 [2024-11-18 13:10:02.603948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.603981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.604245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.604277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.604466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.604499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.604633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.604663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.604863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.604894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.006 [2024-11-18 13:10:02.605017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.605050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.605245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.605277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.605504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.605538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.605731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.605763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.605940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.605970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.006 [2024-11-18 13:10:02.606248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.606279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.606415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.606447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.606715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.606747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.606850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.606881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 00:27:05.006 [2024-11-18 13:10:02.607141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.006 [2024-11-18 13:10:02.607172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.006 qpair failed and we were unable to recover it. 
00:27:05.010 [2024-11-18 13:10:02.629930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.629962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.630088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.630121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.630262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.630292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.630434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.630467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.630587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.630617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 
00:27:05.010 [2024-11-18 13:10:02.630743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.630775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.630895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.630927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.631043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.631074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.631260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.631293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.631542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.631575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 
00:27:05.010 [2024-11-18 13:10:02.631709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.631740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.631915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.631946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.632140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.632172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.632349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.632393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.632511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.632542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 
00:27:05.010 [2024-11-18 13:10:02.632657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.632688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.632902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.632940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.633048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.633078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.633318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.633349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.633469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.633500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 
00:27:05.010 [2024-11-18 13:10:02.633750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.633781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.634070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.634102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.010 [2024-11-18 13:10:02.634218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.010 [2024-11-18 13:10:02.634250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.010 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.634399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.634431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.634555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.634586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 
00:27:05.011 [2024-11-18 13:10:02.634710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.634742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.634930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.634962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.635148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.635178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.635282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.635313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.635441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.635474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 
00:27:05.011 [2024-11-18 13:10:02.635612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.635644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.635910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.635942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.636183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.636215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.636342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.636383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.636619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.636651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 
00:27:05.011 [2024-11-18 13:10:02.636834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.636866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.636984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.637015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.637212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.637244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.637462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.637498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.637677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.637710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 
00:27:05.011 [2024-11-18 13:10:02.637839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.637869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.638004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.638036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.638223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.638255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.638513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.638587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.638800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.638840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 
00:27:05.011 [2024-11-18 13:10:02.639037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.639070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.639250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.639283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.639504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.639540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.639734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.639768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 00:27:05.011 [2024-11-18 13:10:02.640040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.011 [2024-11-18 13:10:02.640072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.011 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-11-18 13:10:02.640276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.640309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.640450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.640484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.640616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.640649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.640830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.640863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.641012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.641045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-11-18 13:10:02.641181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.641214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.641398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.641431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.641562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.641596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.641711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.641742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.641945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.641976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-11-18 13:10:02.642118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.642151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.642343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.642385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.642525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.642558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.642774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.642804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.642987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.643019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-11-18 13:10:02.643264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.643297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.643500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.643534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.643675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.643706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.643876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.643910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 00:27:05.296 [2024-11-18 13:10:02.644096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.644129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.296 qpair failed and we were unable to recover it. 
00:27:05.296 [2024-11-18 13:10:02.644309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.296 [2024-11-18 13:10:02.644348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.297 qpair failed and we were unable to recover it. 00:27:05.297 [2024-11-18 13:10:02.644539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.297 [2024-11-18 13:10:02.644571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.297 qpair failed and we were unable to recover it. 00:27:05.297 [2024-11-18 13:10:02.644751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.297 [2024-11-18 13:10:02.644784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.297 qpair failed and we were unable to recover it. 00:27:05.297 [2024-11-18 13:10:02.644965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.297 [2024-11-18 13:10:02.644996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.297 qpair failed and we were unable to recover it. 00:27:05.297 [2024-11-18 13:10:02.645117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.297 [2024-11-18 13:10:02.645147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.297 qpair failed and we were unable to recover it. 
00:27:05.297 [2024-11-18 13:10:02.645271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.297 [2024-11-18 13:10:02.645303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.297 qpair failed and we were unable to recover it.
[The same three-message sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, differing only in timestamp, from 13:10:02.645271 through 13:10:02.666679.]
00:27:05.300 [2024-11-18 13:10:02.666921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.300 [2024-11-18 13:10:02.666952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.300 qpair failed and we were unable to recover it. 00:27:05.300 [2024-11-18 13:10:02.667072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.300 [2024-11-18 13:10:02.667103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.300 qpair failed and we were unable to recover it. 00:27:05.300 [2024-11-18 13:10:02.667302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.300 [2024-11-18 13:10:02.667332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.300 qpair failed and we were unable to recover it. 00:27:05.300 [2024-11-18 13:10:02.667463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.300 [2024-11-18 13:10:02.667496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.300 qpair failed and we were unable to recover it. 00:27:05.300 [2024-11-18 13:10:02.667620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.300 [2024-11-18 13:10:02.667651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.300 qpair failed and we were unable to recover it. 
00:27:05.300 [2024-11-18 13:10:02.667896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.300 [2024-11-18 13:10:02.667928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.300 qpair failed and we were unable to recover it. 00:27:05.300 [2024-11-18 13:10:02.668052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.300 [2024-11-18 13:10:02.668083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.668189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.668221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.668401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.668434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.668573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.668604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 
00:27:05.301 [2024-11-18 13:10:02.668801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.668834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.668953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.668984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.669113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.669144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.669317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.669348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.669474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.669505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 
00:27:05.301 [2024-11-18 13:10:02.669693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.669725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.669841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.669879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.669992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.670024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.670192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.670223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.670391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.670422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 
00:27:05.301 [2024-11-18 13:10:02.670622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.670654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.670765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.670796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.670966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.670997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.671116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.671147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.671381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.671414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 
00:27:05.301 [2024-11-18 13:10:02.671592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.671623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.671865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.671897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.672102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.672134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.672317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.672348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.672492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.672524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 
00:27:05.301 [2024-11-18 13:10:02.672822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.672894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.301 qpair failed and we were unable to recover it. 00:27:05.301 [2024-11-18 13:10:02.673097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.301 [2024-11-18 13:10:02.673134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.673402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.673438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.673631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.673664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.673787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.673820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 
00:27:05.302 [2024-11-18 13:10:02.674017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.674049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.674255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.674286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.674414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.674446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.674636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.674668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.674802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.674833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 
00:27:05.302 [2024-11-18 13:10:02.674959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.674989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.675189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.675221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.675409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.675440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.675573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.675614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.675789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.675821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 
00:27:05.302 [2024-11-18 13:10:02.676095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.676126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.676239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.676270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.676461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.676493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.676608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.676640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.676768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.676799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 
00:27:05.302 [2024-11-18 13:10:02.676920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.676951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.677143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.677174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.677368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.677402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.677522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.677553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.677687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.677719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 
00:27:05.302 [2024-11-18 13:10:02.677858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.677889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.678068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.678100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.678285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.678316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.678453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.678484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.678614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.678645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 
00:27:05.302 [2024-11-18 13:10:02.678825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.678857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.679042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.679073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.679250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.302 [2024-11-18 13:10:02.679282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.302 qpair failed and we were unable to recover it. 00:27:05.302 [2024-11-18 13:10:02.679388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.679422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 00:27:05.303 [2024-11-18 13:10:02.679647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.679679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 
00:27:05.303 [2024-11-18 13:10:02.679796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.679826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 00:27:05.303 [2024-11-18 13:10:02.679961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.679992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 00:27:05.303 [2024-11-18 13:10:02.680116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.680147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 00:27:05.303 [2024-11-18 13:10:02.680326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.680366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 00:27:05.303 [2024-11-18 13:10:02.680490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.680521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 
00:27:05.303 [2024-11-18 13:10:02.680745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.680776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 00:27:05.303 [2024-11-18 13:10:02.680956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.680987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 00:27:05.303 [2024-11-18 13:10:02.681163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.681195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 00:27:05.303 [2024-11-18 13:10:02.681323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.681364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 00:27:05.303 [2024-11-18 13:10:02.681631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.681662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 
00:27:05.303 [2024-11-18 13:10:02.681838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.303 [2024-11-18 13:10:02.681868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.303 qpair failed and we were unable to recover it. 
00:27:05.304 [2024-11-18 13:10:02.688567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.304 [2024-11-18 13:10:02.688639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.304 qpair failed and we were unable to recover it. 
00:27:05.307 [2024-11-18 13:10:02.703512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.703556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.703665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.703696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.703881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.703912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.704091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.704122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.704332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.704393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 
00:27:05.307 [2024-11-18 13:10:02.704515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.704546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.704675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.704707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.704820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.704851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.704965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.704996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.705244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.705275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 
00:27:05.307 [2024-11-18 13:10:02.705453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.705485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.705663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.705693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.705820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.705850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.705955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.705987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.706168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.706199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 
00:27:05.307 [2024-11-18 13:10:02.706377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.706408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.706580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.706611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.706721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.706752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.706893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.706923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.707046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.707077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 
00:27:05.307 [2024-11-18 13:10:02.707188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.707220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.707405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.707438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.707695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.707726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.707897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.707928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.708110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.708140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 
00:27:05.307 [2024-11-18 13:10:02.708250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.307 [2024-11-18 13:10:02.708281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.307 qpair failed and we were unable to recover it. 00:27:05.307 [2024-11-18 13:10:02.708469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.708501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.708626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.708658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.708766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.708797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.709002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.709033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 
00:27:05.308 [2024-11-18 13:10:02.710445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.710500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.710800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.710833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.710962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.710993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.711115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.711147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.711403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.711445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 
00:27:05.308 [2024-11-18 13:10:02.711607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.711637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.711755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.711784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.711896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.711924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.712050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.712078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.712318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.712378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 
00:27:05.308 [2024-11-18 13:10:02.712647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.712686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.712810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.712841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.712951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.712981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.713283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.713314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.713521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.713553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 
00:27:05.308 [2024-11-18 13:10:02.713743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.713774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.713891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.713923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.714062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.714090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.714270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.714301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.714547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.714578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 
00:27:05.308 [2024-11-18 13:10:02.714704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.714735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.714920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.714948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.715114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.715141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.715326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.715363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.715565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.715597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 
00:27:05.308 [2024-11-18 13:10:02.715702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.308 [2024-11-18 13:10:02.715733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.308 qpair failed and we were unable to recover it. 00:27:05.308 [2024-11-18 13:10:02.715851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.715883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.715997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.716028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.716240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.716268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.716375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.716404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 
00:27:05.309 [2024-11-18 13:10:02.716529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.716558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.716670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.716698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.716883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.716912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.717171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.717200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.717297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.717325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 
00:27:05.309 [2024-11-18 13:10:02.717606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.717636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.717745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.717773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.717891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.717920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.718019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.718047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.718178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.718206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 
00:27:05.309 [2024-11-18 13:10:02.718392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.718423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.718688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.718716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.718905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.718936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.719126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.719157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.719276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.719306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 
00:27:05.309 [2024-11-18 13:10:02.719494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.719527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.719689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.719719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.719957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.719988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.720166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.720194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.720305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.720333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 
00:27:05.309 [2024-11-18 13:10:02.720547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.720582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.720760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.720789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.720905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.720933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.721050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.721078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.721191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.721220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 
00:27:05.309 [2024-11-18 13:10:02.721339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.721379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.721550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.721578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.721697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.721725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.721991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.722023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 00:27:05.309 [2024-11-18 13:10:02.722150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.722180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.309 qpair failed and we were unable to recover it. 
00:27:05.309 [2024-11-18 13:10:02.722297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.309 [2024-11-18 13:10:02.722328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.722506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.722538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.722717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.722748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.722855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.722886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.723010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.723041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 
00:27:05.310 [2024-11-18 13:10:02.723158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.723189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.723309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.723340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.723561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.723593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.723831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.723862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.724109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.724140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 
00:27:05.310 [2024-11-18 13:10:02.724324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.724366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.724551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.724581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.724713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.724745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.724917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.724948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.725129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.725159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 
00:27:05.310 [2024-11-18 13:10:02.725333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.725407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.725536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.725569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.725703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.725735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.725852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.725884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.726061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.726093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 
00:27:05.310 [2024-11-18 13:10:02.726210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.726241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.726458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.726490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.726672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.726703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.726815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.726847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.727091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.727122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 
00:27:05.310 [2024-11-18 13:10:02.727328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.727367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.727545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.727576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.727820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.727851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.727970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.728001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.728101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.728132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 
00:27:05.310 [2024-11-18 13:10:02.728307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.728343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.728488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.728520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.310 qpair failed and we were unable to recover it. 00:27:05.310 [2024-11-18 13:10:02.728726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.310 [2024-11-18 13:10:02.728757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.729003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.729034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.729375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.729407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 
00:27:05.311 [2024-11-18 13:10:02.729533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.729564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.729698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.729729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.729861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.729892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.730006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.730038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.730154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.730185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 
00:27:05.311 [2024-11-18 13:10:02.730373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.730406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.730526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.730557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.730675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.730706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.730970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.731002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.731112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.731144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 
00:27:05.311 [2024-11-18 13:10:02.731276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.731307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.731477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.731509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.731623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.731654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.731830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.731860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.731994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.732025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 
00:27:05.311 [2024-11-18 13:10:02.732270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.732301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.732431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.732464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.732590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.732621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.732746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.732777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.733001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.733034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 
00:27:05.311 [2024-11-18 13:10:02.733282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.733313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.733504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.733536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.733682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.733714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.733904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.733935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.734064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.734095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 
00:27:05.311 [2024-11-18 13:10:02.734276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.734307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.734438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.734469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.734592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.734622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.734749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.734780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 00:27:05.311 [2024-11-18 13:10:02.734895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.734925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.311 qpair failed and we were unable to recover it. 
00:27:05.311 [2024-11-18 13:10:02.735055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.311 [2024-11-18 13:10:02.735086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.735200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.735230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.735403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.735435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.735550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.735581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.735814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.735846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 
00:27:05.312 [2024-11-18 13:10:02.736035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.736072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.736258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.736289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.736427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.736459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.736667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.736699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.736832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.736863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 
00:27:05.312 [2024-11-18 13:10:02.737043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.737074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.737204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.737235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.737361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.737393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.737577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.737608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 00:27:05.312 [2024-11-18 13:10:02.737786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.737818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. 
00:27:05.312 [2024-11-18 13:10:02.737997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.312 [2024-11-18 13:10:02.738029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.312 qpair failed and we were unable to recover it. [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7fad24000b90 from 13:10:02.738207 through 13:10:02.754323 ...] 00:27:05.315 [2024-11-18 13:10:02.754499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.315 [2024-11-18 13:10:02.754570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.315 qpair failed and we were unable to recover it. [... sequence repeats for tqpair=0x7fad1c000b90 through 13:10:02.755485 ...] 00:27:05.315 [2024-11-18 13:10:02.755654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.315 [2024-11-18 13:10:02.755720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.315 qpair failed and we were unable to recover it. [... sequence repeats for tqpair=0x7fad18000b90 through 13:10:02.758689 ...]
00:27:05.316 [2024-11-18 13:10:02.758795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.758826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.758940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.758971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.759162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.759192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.759381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.759413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.759544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.759575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 
00:27:05.316 [2024-11-18 13:10:02.759689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.759719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.759833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.759864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.759977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.760008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.760131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.760162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.760276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.760308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 
00:27:05.316 [2024-11-18 13:10:02.760444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.760476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.760701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.760732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.760834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.760864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.761105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.761137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.761247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.761277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 
00:27:05.316 [2024-11-18 13:10:02.761455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.761486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.761694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.761725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.761910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.761941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.762043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.762074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.762193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.762225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 
00:27:05.316 [2024-11-18 13:10:02.762346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.762387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.762630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.762661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.762781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.762813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.762921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.762951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.763062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.763093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 
00:27:05.316 [2024-11-18 13:10:02.763205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.763236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.763369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.763401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.316 qpair failed and we were unable to recover it. 00:27:05.316 [2024-11-18 13:10:02.763578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.316 [2024-11-18 13:10:02.763609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.763757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.763788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.763918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.763949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 
00:27:05.317 [2024-11-18 13:10:02.764135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.764166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.764348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.764392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.764521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.764552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.764676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.764707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.764825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.764856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 
00:27:05.317 [2024-11-18 13:10:02.765027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.765064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.765187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.765219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.765342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.765388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.765575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.765607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.765728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.765760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 
00:27:05.317 [2024-11-18 13:10:02.765875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.765905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.766009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.766040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.766171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.766202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.766318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.766349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.766478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.766509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 
00:27:05.317 [2024-11-18 13:10:02.766623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.766654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.766781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.766812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.766930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.766962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.767089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.767120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.767256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.767288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 
00:27:05.317 [2024-11-18 13:10:02.767461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.767493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.767601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.767633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.767753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.767784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.767962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.767993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.768106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.768137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 
00:27:05.317 [2024-11-18 13:10:02.768257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.768288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.768430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.768462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.768636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.768667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.768847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.768879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.317 [2024-11-18 13:10:02.769074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.769105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 
00:27:05.317 [2024-11-18 13:10:02.769229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.317 [2024-11-18 13:10:02.769259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.317 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.769438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.769470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.769636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.769708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.769920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.769955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.770076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.770109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 
00:27:05.318 [2024-11-18 13:10:02.770287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.770320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.770443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.770476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.770608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.770639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.770882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.770915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.771024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.771055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 
00:27:05.318 [2024-11-18 13:10:02.771184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.771215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.771390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.771425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.771599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.771630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.771777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.771809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 00:27:05.318 [2024-11-18 13:10:02.771913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.771945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 
00:27:05.318 [2024-11-18 13:10:02.772052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.318 [2024-11-18 13:10:02.772083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.318 qpair failed and we were unable to recover it. 
00:27:05.320 [2024-11-18 13:10:02.782633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.320 [2024-11-18 13:10:02.782684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.320 qpair failed and we were unable to recover it. 
00:27:05.322 [2024-11-18 13:10:02.794178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.794209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.794340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.794382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.794499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.794530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.794836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.794868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.794994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.795026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 
00:27:05.322 [2024-11-18 13:10:02.795142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.795173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.795381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.795414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.795541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.795579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.795700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.795730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.795913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.795945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 
00:27:05.322 [2024-11-18 13:10:02.796068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.796100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.796214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.796245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.796348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.796396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.796527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.796558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.796755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.796787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 
00:27:05.322 [2024-11-18 13:10:02.796973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.797004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.797204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.797236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.797424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.797457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.797563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.797594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.797782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.797814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 
00:27:05.322 [2024-11-18 13:10:02.797926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.797958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.798088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.798122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.798261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.322 [2024-11-18 13:10:02.798292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.322 qpair failed and we were unable to recover it. 00:27:05.322 [2024-11-18 13:10:02.798554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.798586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.798695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.798726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 
00:27:05.323 [2024-11-18 13:10:02.798838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.798869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.799043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.799074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.799192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.799222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.799396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.799428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.799550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.799582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 
00:27:05.323 [2024-11-18 13:10:02.799754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.799785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.800007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.800038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.800219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.800251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.800439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.800471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.800691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.800728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 
00:27:05.323 [2024-11-18 13:10:02.800842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.800873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.800990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.801021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.801212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.801242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.801366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.801398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.801515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.801546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 
00:27:05.323 [2024-11-18 13:10:02.801730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.801760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.801878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.801909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.802030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.802061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.802312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.802343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.802471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.802503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 
00:27:05.323 [2024-11-18 13:10:02.802606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.802637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.802840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.802871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.802976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.803008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.803195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.803226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.323 qpair failed and we were unable to recover it. 00:27:05.323 [2024-11-18 13:10:02.803343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.323 [2024-11-18 13:10:02.803385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-11-18 13:10:02.803516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.803547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.803790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.803822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.803955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.803985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.804106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.804137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.804254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.804285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-11-18 13:10:02.804394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.804426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.804641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.804672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.804861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.804892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.805017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.805047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.805158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.805188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-11-18 13:10:02.805375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.805409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.805599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.805632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.805815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.805847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.805968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.805998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.806178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.806209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-11-18 13:10:02.806315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.806346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.806472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.806503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.806618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.806650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.806773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.806804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.807001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.807033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-11-18 13:10:02.807150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.807181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.807426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.807458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.807576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.807606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.807791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.807822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.808060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.808098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-11-18 13:10:02.808268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.808299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.808433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.808465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.808641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.808673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.808855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.808886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.809108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.809139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 
00:27:05.324 [2024-11-18 13:10:02.809320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.324 [2024-11-18 13:10:02.809364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.324 qpair failed and we were unable to recover it. 00:27:05.324 [2024-11-18 13:10:02.809480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.809512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.809698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.809729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.809906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.809937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.810124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.810156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-11-18 13:10:02.810272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.810303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.810444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.810476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.810590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.810620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.810749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.810781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.810962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.810993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-11-18 13:10:02.811098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.811129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.811259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.811290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.811492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.811523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.811632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.811663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.811778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.811809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-11-18 13:10:02.811929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.811960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.812067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.812098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.812230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.812262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.812378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.812415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.812517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.812547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-11-18 13:10:02.812722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.812753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.812987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.813058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.813244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.813317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.813502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.813569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.813727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.813764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-11-18 13:10:02.813869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.813900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.814056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.814087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.814333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.814374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.814559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.814591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.814761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.814792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 
00:27:05.325 [2024-11-18 13:10:02.814983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.815014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.815216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.815247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.815437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.815470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.325 [2024-11-18 13:10:02.815589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.325 [2024-11-18 13:10:02.815620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.325 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.815745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.815782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 
00:27:05.326 [2024-11-18 13:10:02.815886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.815917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.816040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.816071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.816219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.816250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.816370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.816403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.816515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.816547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 
00:27:05.326 [2024-11-18 13:10:02.816664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.816695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.816934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.816965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.817094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.817125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.817225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.817256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.817475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.817508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 
00:27:05.326 [2024-11-18 13:10:02.817642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.817673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.817845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.817876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.817996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.818029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.818157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.818188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.818453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.818486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 
00:27:05.326 [2024-11-18 13:10:02.818672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.818703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.818824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.818856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.818986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.819017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.819128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.819159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.819372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.819405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 
00:27:05.326 [2024-11-18 13:10:02.819529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.819560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.819737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.819768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.819924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.819955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.820064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.820096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.820203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.820234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 
00:27:05.326 [2024-11-18 13:10:02.820361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.820394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.820526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.820568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.820702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.820735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.820868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.820900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 00:27:05.326 [2024-11-18 13:10:02.821038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.326 [2024-11-18 13:10:02.821070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.326 qpair failed and we were unable to recover it. 
00:27:05.326 [2024-11-18 13:10:02.821176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.821208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.821393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.821428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.821538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.821569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.821840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.821871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.821996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.822027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 
00:27:05.327 [2024-11-18 13:10:02.822143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.822175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.822293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.822324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.822512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.822544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.822735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.822767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.822891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.822932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 
00:27:05.327 [2024-11-18 13:10:02.823131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.823162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.823279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.823310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.823434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.823466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.823577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.823608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.823711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.823742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 
00:27:05.327 [2024-11-18 13:10:02.823928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.823959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.824083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.824114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.824232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.824263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.824441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.824473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.824591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.824622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 
00:27:05.327 [2024-11-18 13:10:02.824738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.824767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.824888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.824919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.825097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.825128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.825240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.825271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.825389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.825422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 
00:27:05.327 [2024-11-18 13:10:02.825540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.825571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.825814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.825844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.825957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.825989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.826111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.826142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.826259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.826290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 
00:27:05.327 [2024-11-18 13:10:02.826403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.826436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.826577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.826609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.826730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.826760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.327 [2024-11-18 13:10:02.826871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.327 [2024-11-18 13:10:02.826901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.327 qpair failed and we were unable to recover it. 00:27:05.328 [2024-11-18 13:10:02.827078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.328 [2024-11-18 13:10:02.827110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.328 qpair failed and we were unable to recover it. 
00:27:05.328 [2024-11-18 13:10:02.827235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.827266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.827611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.827685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.827950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.827996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.828126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.828162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.828433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.828468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.828576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.828608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.828721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.828752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.828963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.828994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.829168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.829200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.829389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.829421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.829605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.829636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.829739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.829771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.829967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.829997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.830122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.830153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.830407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.830441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.830645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.830677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.830864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.830895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.831073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.831104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.831288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.831320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.831524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.831557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.831670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.831702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.831883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.831914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.832036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.832067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.832181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.832212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.832459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.832492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.832686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.832717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.832857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.832888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.833001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.833033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.833260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.833293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.833501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.833533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.833711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.328 [2024-11-18 13:10:02.833742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.328 qpair failed and we were unable to recover it.
00:27:05.328 [2024-11-18 13:10:02.833866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.833897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.834082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.834113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.834303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.834334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.834454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.834485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.834607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.834638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.834808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.834840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.835020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.835051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.835240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.835272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.835454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.835486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.835621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.835652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.835852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.835889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.836027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.836058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.836183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.836214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.836386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.836418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.836541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.836573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.836705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.836736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.836950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.836981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.837115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.837147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.837330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.837370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.837569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.837601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.837718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.837749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.837933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.837965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.838203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.838235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.838390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.838423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.838543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.838574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.838750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.838781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.838898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.329 [2024-11-18 13:10:02.838929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.329 qpair failed and we were unable to recover it.
00:27:05.329 [2024-11-18 13:10:02.839104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.839135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.839320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.839360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.839469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.839500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.839719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.839750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.839923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.839954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.840080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.840112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.840367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.840401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.840513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.840543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.840717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.840748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.840872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.840904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.841031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.841062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.841239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.841269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.841390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.841423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.841553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.841585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.841699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.841730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.841916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.841947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.842118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.842150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.842274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.842306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.842530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.842562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.842749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.842780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.842968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.842999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.843185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.843215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.843347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.843387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.843564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.843604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.843714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.843746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.843862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.843894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.844009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.844042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.844172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.844203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.844307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.844339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.844534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.844566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.844681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.844712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.844901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.844933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.330 [2024-11-18 13:10:02.845112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.330 [2024-11-18 13:10:02.845143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.330 qpair failed and we were unable to recover it.
00:27:05.331 [2024-11-18 13:10:02.845262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.331 [2024-11-18 13:10:02.845293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.331 qpair failed and we were unable to recover it.
00:27:05.331 [2024-11-18 13:10:02.845476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.331 [2024-11-18 13:10:02.845508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.331 qpair failed and we were unable to recover it.
00:27:05.331 [2024-11-18 13:10:02.845636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.331 [2024-11-18 13:10:02.845668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.331 qpair failed and we were unable to recover it.
00:27:05.331 [2024-11-18 13:10:02.845911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.331 [2024-11-18 13:10:02.845942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.331 qpair failed and we were unable to recover it.
00:27:05.331 [2024-11-18 13:10:02.846051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.846083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.846254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.846286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.846484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.846517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.846695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.846726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.846895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.846926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 
00:27:05.331 [2024-11-18 13:10:02.847055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.847086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.847260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.847291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.847500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.847533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.847736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.847767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.847900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.847932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 
00:27:05.331 [2024-11-18 13:10:02.848045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.848076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.848266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.848298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.848415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.848448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.848576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.848607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.848728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.848759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 
00:27:05.331 [2024-11-18 13:10:02.848876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.848909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.849116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.849147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.849275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.849306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.849530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.849563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.849668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.849699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 
00:27:05.331 [2024-11-18 13:10:02.849878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.849910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.850111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.850144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.850322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.850364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.850484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.850516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.850632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.850663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 
00:27:05.331 [2024-11-18 13:10:02.850838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.850869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.851042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.331 [2024-11-18 13:10:02.851080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.331 qpair failed and we were unable to recover it. 00:27:05.331 [2024-11-18 13:10:02.851299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.851330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.851516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.851548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.851675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.851706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 
00:27:05.332 [2024-11-18 13:10:02.851881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.851912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.852033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.852063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.852188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.852219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.852329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.852369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.852563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.852593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 
00:27:05.332 [2024-11-18 13:10:02.852736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.852767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.852941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.852972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.853149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.853180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.853371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.853404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.853543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.853574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 
00:27:05.332 [2024-11-18 13:10:02.853707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.853738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.853851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.853882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.854033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.854062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.854235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.854266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.854396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.854431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 
00:27:05.332 [2024-11-18 13:10:02.854601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.854632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.854757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.854788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.854921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.854952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.855065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.855096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.855267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.855298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 
00:27:05.332 [2024-11-18 13:10:02.855504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.855538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.855649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.855679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.855785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.855815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.855926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.855959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.856137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.856168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 
00:27:05.332 [2024-11-18 13:10:02.856293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.856324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.856447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.856479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.856661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.856692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.856793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.332 [2024-11-18 13:10:02.856824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.332 qpair failed and we were unable to recover it. 00:27:05.332 [2024-11-18 13:10:02.856944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.856974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-11-18 13:10:02.857156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.857187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.857291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.857322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.857507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.857538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.857668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.857699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.857818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.857849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-11-18 13:10:02.857959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.857990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.858172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.858209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.858329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.858374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.858649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.858680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.858812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.858844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-11-18 13:10:02.859038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.859070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.859195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.859225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.859372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.859404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.859666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.859698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.859818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.859850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-11-18 13:10:02.859952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.859983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.860087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.860117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.860246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.860276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.860464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.860496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.860620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.860651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-11-18 13:10:02.860876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.860907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.861075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.861106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.861226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.861258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.861385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.861417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 00:27:05.333 [2024-11-18 13:10:02.861530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.333 [2024-11-18 13:10:02.861560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.333 qpair failed and we were unable to recover it. 
00:27:05.333 [2024-11-18 13:10:02.861676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.333 [2024-11-18 13:10:02.861707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.333 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111, ECONNREFUSED) / qpair-failed sequence repeats for tqpair=0x7fad18000b90 from 13:10:02.861882 through 13:10:02.874698 ...]
00:27:05.336 [2024-11-18 13:10:02.875489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.336 [2024-11-18 13:10:02.875537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:05.336 qpair failed and we were unable to recover it.
[... identical sequence repeats for tqpair=0x7fad1c000b90 through 13:10:02.878161, then again for tqpair=0x7fad18000b90 from 13:10:02.878289 through 13:10:02.883040 ...]
00:27:05.337 [2024-11-18 13:10:02.883176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.883208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.883404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.883437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.883548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.883579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.883691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.883722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.883910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.883940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 
00:27:05.337 [2024-11-18 13:10:02.884059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.884089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.884196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.884226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.884366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.884399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.884508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.884538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.884642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.884673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 
00:27:05.337 [2024-11-18 13:10:02.884785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.884815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.884995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.885026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.885126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.885156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.885465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.885537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.885732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.885799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 
00:27:05.337 [2024-11-18 13:10:02.886003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.886038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.337 [2024-11-18 13:10:02.886165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.337 [2024-11-18 13:10:02.886200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.337 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.886376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.886411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.886606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.886638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.886897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.886929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 
00:27:05.338 [2024-11-18 13:10:02.887105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.887136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.887253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.887284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.887408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.887443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.887555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.887586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.887783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.887814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 
00:27:05.338 [2024-11-18 13:10:02.888053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.888084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.888325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.888375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.888498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.888530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.888648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.888679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.888861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.888892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 
00:27:05.338 [2024-11-18 13:10:02.889081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.889113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.889314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.889343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.889596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.889627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.889755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.889786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.889904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.889934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 
00:27:05.338 [2024-11-18 13:10:02.890047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.890078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.890210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.890241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.890409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.890441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.890611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.890642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.890772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.890804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 
00:27:05.338 [2024-11-18 13:10:02.890946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.890978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.891085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.891116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.891257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.891288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.891404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.891434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.891611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.891641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 
00:27:05.338 [2024-11-18 13:10:02.891824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.891855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.892064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.892093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.892208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.892239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.892420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.892452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 00:27:05.338 [2024-11-18 13:10:02.892581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.338 [2024-11-18 13:10:02.892612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.338 qpair failed and we were unable to recover it. 
00:27:05.338 [2024-11-18 13:10:02.892784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.892815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.893001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.893032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.893155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.893186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.893342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.893427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.893636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.893678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 
00:27:05.339 [2024-11-18 13:10:02.893794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.893825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.893950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.893981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.894092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.894124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.894304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.894335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.894470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.894502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 
00:27:05.339 [2024-11-18 13:10:02.894631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.894663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.894782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.894813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.895006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.895038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.895148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.895179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.895390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.895424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 
00:27:05.339 [2024-11-18 13:10:02.895531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.895562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.895674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.895711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.895828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.895859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.896038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.896069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.896178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.896209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 
00:27:05.339 [2024-11-18 13:10:02.896382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.896415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.896588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.896618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.896855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.896885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.897003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.897034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.897143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.897174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 
00:27:05.339 [2024-11-18 13:10:02.897297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.897328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.897533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.897565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.897775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.897806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.897930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.897961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 00:27:05.339 [2024-11-18 13:10:02.898149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.339 [2024-11-18 13:10:02.898181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.339 qpair failed and we were unable to recover it. 
00:27:05.339 [2024-11-18 13:10:02.898385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.898418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.898520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.898550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.898664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.898695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.898821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.898852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.899032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.899062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 
00:27:05.340 [2024-11-18 13:10:02.899248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.899279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.899481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.899513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.899716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.899749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.899874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.899905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.900017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.900047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 
00:27:05.340 [2024-11-18 13:10:02.900238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.900269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.900465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.900497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.900602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.900633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.900780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.900821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.900938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.900971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 
00:27:05.340 [2024-11-18 13:10:02.901171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.901203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.901315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.901347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.901485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.901518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.901628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.901659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.901762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.901794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 
00:27:05.340 [2024-11-18 13:10:02.901934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.901966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.902147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.902179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.902292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.902324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.902640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.902710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.902873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.902919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 
00:27:05.340 [2024-11-18 13:10:02.903051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.903083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.903195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.903227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.903342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.903391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.903592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.903623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.903823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.903854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 
00:27:05.340 [2024-11-18 13:10:02.904070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.904102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.904228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.904258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.904394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.904427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.340 [2024-11-18 13:10:02.904533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.340 [2024-11-18 13:10:02.904565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.340 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.904758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.904789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-11-18 13:10:02.904975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.905007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.905244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.905275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.905411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.905443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.905577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.905608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.905720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.905752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-11-18 13:10:02.905984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.906016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.906126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.906157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.906267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.906298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.906521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.906554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.906658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.906690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-11-18 13:10:02.906799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.906830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.906947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.906978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.907103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.907135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.907257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.907288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.907408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.907441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-11-18 13:10:02.907559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.907589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.907840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.907871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.908060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.908090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.908217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.908255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.908373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.908405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-11-18 13:10:02.908612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.908643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.908795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.908825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.908934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.908965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.909152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.909183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.909304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.909335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-11-18 13:10:02.909521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.909552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.909729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.909760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.909880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.909911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.910099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.910130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.910344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.910387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 
00:27:05.341 [2024-11-18 13:10:02.910598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.910631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.910901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.910931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.911090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.911121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.911234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.341 [2024-11-18 13:10:02.911264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.341 qpair failed and we were unable to recover it. 00:27:05.341 [2024-11-18 13:10:02.911391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.911423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.342 [2024-11-18 13:10:02.911602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.911633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.911811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.911842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.912032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.912063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.912194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.912225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.912348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.912390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.342 [2024-11-18 13:10:02.912503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.912534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.912641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.912671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.912799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.912829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.913004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.913036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.913208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.913239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.342 [2024-11-18 13:10:02.913383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.913419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.913533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.913565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.913746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.913778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.913959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.913991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.914103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.914135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.342 [2024-11-18 13:10:02.914261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.914292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.914424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.914456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.914577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.914609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.914719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.914751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.914936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.914969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.342 [2024-11-18 13:10:02.915191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.915223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.915417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.915450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.915695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.915727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.915851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.915882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 00:27:05.342 [2024-11-18 13:10:02.916085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.342 [2024-11-18 13:10:02.916117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.342 qpair failed and we were unable to recover it. 
00:27:05.346 [2024-11-18 13:10:02.936902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.936933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.937114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.937146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.937266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.937298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.937512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.937545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.937664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.937696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 
00:27:05.346 [2024-11-18 13:10:02.937830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.937861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.938123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.938155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.938331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.938373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.938615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.938647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.938756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.938788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 
00:27:05.346 [2024-11-18 13:10:02.939067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.939098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.939235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.939267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.939498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.939531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.939705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.939736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.939999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.940031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 
00:27:05.346 [2024-11-18 13:10:02.940228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.940259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.940437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.940469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.940598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.940630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.940760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.940792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.940968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.940999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 
00:27:05.346 [2024-11-18 13:10:02.941107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.941138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.941327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.941368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.941487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.941518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.941628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.941660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.941849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.941882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 
00:27:05.346 [2024-11-18 13:10:02.942136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.942168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.942293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.942324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.942482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.942515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.942623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.942654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.942856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.942887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 
00:27:05.346 [2024-11-18 13:10:02.943014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.943045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.943180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.943212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.943382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.943415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.943538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.943569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.943763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.943794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 
00:27:05.346 [2024-11-18 13:10:02.943967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.943998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.944114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.944145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.944366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.346 [2024-11-18 13:10:02.944400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.346 qpair failed and we were unable to recover it. 00:27:05.346 [2024-11-18 13:10:02.944594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.944631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.944762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.944794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 
00:27:05.347 [2024-11-18 13:10:02.944978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.945010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.945202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.945234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.945412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.945445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.945553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.945583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.945724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.945756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 
00:27:05.347 [2024-11-18 13:10:02.945864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.945895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.946011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.946043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.946165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.946196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.946382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.946415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.946521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.946552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 
00:27:05.347 [2024-11-18 13:10:02.946726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.946757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.946967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.946998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.947122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.947154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.947326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.947368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.947560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.947591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 
00:27:05.347 [2024-11-18 13:10:02.947831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.947862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.948040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.948075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.948208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.948241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.948412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.948445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.948687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.948719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 
00:27:05.347 [2024-11-18 13:10:02.948832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.948864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.948973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.949005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.949266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.949298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.949511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.949544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.949646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.949676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 
00:27:05.347 [2024-11-18 13:10:02.949812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.949843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.949970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.950002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.950133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.950163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.950274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.950306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.950456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.950489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 
00:27:05.347 [2024-11-18 13:10:02.950670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.950701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.950820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.950851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.950988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.951020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.951124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.951155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.951327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.951371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 
00:27:05.347 [2024-11-18 13:10:02.951557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.951589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.951708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.951739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.951930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.951962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.952088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.952119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 00:27:05.347 [2024-11-18 13:10:02.952262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.347 [2024-11-18 13:10:02.952320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.347 qpair failed and we were unable to recover it. 
00:27:05.347 [2024-11-18 13:10:02.952466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.952501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.952680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.952711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.952894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.952925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.953037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.953068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.953264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.953295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 
00:27:05.348 [2024-11-18 13:10:02.953504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.953536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.953663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.953694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.953883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.953914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.954095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.954126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.954316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.954346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 
00:27:05.348 [2024-11-18 13:10:02.954473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.954504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.954687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.954719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.954912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.954943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.955190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.955222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.955428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.955460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 
00:27:05.348 [2024-11-18 13:10:02.955643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.955674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.955798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.955829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.955954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.955985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.956115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.956146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.956372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.956404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 
00:27:05.348 [2024-11-18 13:10:02.956537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.956568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.956689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.956720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.956835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.956866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.957031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.957063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.957244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.957274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 
00:27:05.348 [2024-11-18 13:10:02.957380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.957412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.957589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.957620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.957751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.957783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.957962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.957993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.958120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.958150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 
00:27:05.348 [2024-11-18 13:10:02.958400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.958432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.958546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.958577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.958758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.958789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.958917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.958948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.959062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.959093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 
00:27:05.348 [2024-11-18 13:10:02.959193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.959224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.959346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.959391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.959514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.959545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.959648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.959680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 00:27:05.348 [2024-11-18 13:10:02.959789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.959827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.348 qpair failed and we were unable to recover it. 
00:27:05.348 [2024-11-18 13:10:02.959960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.348 [2024-11-18 13:10:02.959991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.960195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.960226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.960406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.960438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.960544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.960575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.960700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.960731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 
00:27:05.349 [2024-11-18 13:10:02.960848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.960879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.960996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.961027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.961268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.961299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.961503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.961536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.961723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.961753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 
00:27:05.349 [2024-11-18 13:10:02.961871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.961903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.962024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.962055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.962163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.962193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.962307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.962339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.962460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.962493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 
00:27:05.349 [2024-11-18 13:10:02.962592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.962622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.962740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.962771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.962893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.962924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.963032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.963062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.963183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.963214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 
00:27:05.349 [2024-11-18 13:10:02.963322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.963360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.963539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.963570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.963759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.963790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.963916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.963947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.964059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.964089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 
00:27:05.349 [2024-11-18 13:10:02.964216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.964247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.964378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.964411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.964553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.964585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.964708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.964739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.964869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.964899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 
00:27:05.349 [2024-11-18 13:10:02.965026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.965058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.965238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.349 [2024-11-18 13:10:02.965268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.349 qpair failed and we were unable to recover it. 00:27:05.349 [2024-11-18 13:10:02.965391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.965434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.965625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.965656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.965837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.965868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 
00:27:05.350 [2024-11-18 13:10:02.965978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.966009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.966200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.966230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.966346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.966402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.966534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.966566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.966753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.966791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 
00:27:05.350 [2024-11-18 13:10:02.966897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.966927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.967064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.967095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.967202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.967232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.967364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.967396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.967514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.967545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 
00:27:05.350 [2024-11-18 13:10:02.967671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.967702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.967826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.967857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.967983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.968014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.968125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.968156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.968268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.968298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 
00:27:05.350 [2024-11-18 13:10:02.968434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.968467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.350 [2024-11-18 13:10:02.968657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.350 [2024-11-18 13:10:02.968689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.350 qpair failed and we were unable to recover it. 00:27:05.631 [2024-11-18 13:10:02.968887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.631 [2024-11-18 13:10:02.968918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.631 qpair failed and we were unable to recover it. 00:27:05.631 [2024-11-18 13:10:02.969053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.631 [2024-11-18 13:10:02.969087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.631 qpair failed and we were unable to recover it. 00:27:05.631 [2024-11-18 13:10:02.969331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.631 [2024-11-18 13:10:02.969373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.631 qpair failed and we were unable to recover it. 
00:27:05.631 [2024-11-18 13:10:02.969570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.631 [2024-11-18 13:10:02.969601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.631 qpair failed and we were unable to recover it. 00:27:05.631 [2024-11-18 13:10:02.969777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.631 [2024-11-18 13:10:02.969808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.631 qpair failed and we were unable to recover it. 00:27:05.631 [2024-11-18 13:10:02.969934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.631 [2024-11-18 13:10:02.969965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.631 qpair failed and we were unable to recover it. 00:27:05.631 [2024-11-18 13:10:02.970078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.631 [2024-11-18 13:10:02.970109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.631 qpair failed and we were unable to recover it. 00:27:05.631 [2024-11-18 13:10:02.970292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.631 [2024-11-18 13:10:02.970323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.631 qpair failed and we were unable to recover it. 
00:27:05.631 [2024-11-18 13:10:02.970550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.631 [2024-11-18 13:10:02.970620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.631 qpair failed and we were unable to recover it.
00:27:05.631 [2024-11-18 13:10:02.972108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.631 [2024-11-18 13:10:02.972166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.631 qpair failed and we were unable to recover it.
00:27:05.631 [2024-11-18 13:10:02.972382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.631 [2024-11-18 13:10:02.972421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.631 qpair failed and we were unable to recover it.
00:27:05.631 [2024-11-18 13:10:02.972543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.631 [2024-11-18 13:10:02.972576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.972723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.972755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.972945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.972978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.973112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.973146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.973285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.973317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.973592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.973625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.973756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.973789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.973900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.973932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.974090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.974122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.974249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.974281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.974397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.974430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.974679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.974710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.974909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.974941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.975052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.975084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.975285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.975317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.975525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.975558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.975734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.975765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.975907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.975939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.976158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.976190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.976376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.976411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.976610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.976641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.978062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.978113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.978378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.978414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.978603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.978636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.978927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.978959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.979129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.979162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.979303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.979334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.979541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.979574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.979698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.979729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.979903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.979934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.980163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.980201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.980332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.980378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.980585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.980617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.980809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.980840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.980943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.980975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.981086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.981118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.981309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.981341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.632 [2024-11-18 13:10:02.981542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.632 [2024-11-18 13:10:02.981574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.632 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.981688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.981720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.981991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.982022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.982134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.982165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.982373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.982406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.982529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.982560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.982769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.982801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.982999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.983030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.983228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.983260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.983432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.983467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.983788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.983821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.984005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.984037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.984304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.984335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.984641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.984673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.984792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.984824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.985053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.985083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.985219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.985249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.985389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.985421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.985537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.985569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.985747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.985779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.985949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.985987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.986177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.986208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.986398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.986430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.986550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.986581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.986697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.986728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.986912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.986944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.987122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.987153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.987288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.987319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.987511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.987544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.987717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.987748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.987947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.987979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.988246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.988277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.988395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.988427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.988691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.988722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.988998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.989031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.989165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.989196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.989377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.989410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.989619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.989651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.989932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.633 [2024-11-18 13:10:02.989964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.633 qpair failed and we were unable to recover it.
00:27:05.633 [2024-11-18 13:10:02.990080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.990112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.990239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.990271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.990390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.990423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.990631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.990662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.990844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.990876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.991018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.991049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.991169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.991200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.991407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.991442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.991566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.991599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.991801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.991833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.992051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.992083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.992201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.992233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.992421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.992453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.992657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.992689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.992803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.992835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.993027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.634 [2024-11-18 13:10:02.993058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.634 qpair failed and we were unable to recover it.
00:27:05.634 [2024-11-18 13:10:02.993285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.993318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.993531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.993563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.993746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.993778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.993909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.993940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.994124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.994156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 
00:27:05.634 [2024-11-18 13:10:02.994279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.994311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.994464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.994503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.994696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.994728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.994841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.994873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.995012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.995043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 
00:27:05.634 [2024-11-18 13:10:02.995177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.995209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.995318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.995349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.995469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.995501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.995685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.995716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.995886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.995918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 
00:27:05.634 [2024-11-18 13:10:02.996095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.996125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.996247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.996278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.996399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.996432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.996637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.996669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.996836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.996867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 
00:27:05.634 [2024-11-18 13:10:02.996979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.997011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.997129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.634 [2024-11-18 13:10:02.997161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.634 qpair failed and we were unable to recover it. 00:27:05.634 [2024-11-18 13:10:02.997345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.997387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.997565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.997597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.997718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.997749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 
00:27:05.635 [2024-11-18 13:10:02.997953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.997986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.998205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.998237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.998420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.998451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.998636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.998668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.998849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.998881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 
00:27:05.635 [2024-11-18 13:10:02.999003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.999035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.999146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.999178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.999444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.999476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.999723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.999760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:02.999865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:02.999897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 
00:27:05.635 [2024-11-18 13:10:03.000154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.000187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.000428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.000461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.000631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.000664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.000770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.000802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.000942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.000974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 
00:27:05.635 [2024-11-18 13:10:03.001090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.001123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.001229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.001260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.001520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.001552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.001671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.001703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.001927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.001958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 
00:27:05.635 [2024-11-18 13:10:03.002071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.002103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.002219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.002251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.002498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.002570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.002803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.002839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.002969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.003002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 
00:27:05.635 [2024-11-18 13:10:03.003128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.003160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.003276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.003307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.003437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.003470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.003648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.003680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.003861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.003893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 
00:27:05.635 [2024-11-18 13:10:03.004009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.004039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.004148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.004179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.004375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.004409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.004544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.004576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 00:27:05.635 [2024-11-18 13:10:03.004698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.635 [2024-11-18 13:10:03.004729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.635 qpair failed and we were unable to recover it. 
00:27:05.635 [2024-11-18 13:10:03.004932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.004974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.005100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.005132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.005303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.005334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.005526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.005558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.005803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.005834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 
00:27:05.636 [2024-11-18 13:10:03.005953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.005984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.006284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.006316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.006443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.006475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.006590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.006620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.006813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.006843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 
00:27:05.636 [2024-11-18 13:10:03.007092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.007124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.007318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.007349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.007495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.007528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.007632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.007663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.007796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.007827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 
00:27:05.636 [2024-11-18 13:10:03.008015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.008046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.008153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.008185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.008300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.008332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.008447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.008479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.008602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.008634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 
00:27:05.636 [2024-11-18 13:10:03.008816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.008847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.009035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.009067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.009254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.009285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.009470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.009503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.009625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.009657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 
00:27:05.636 [2024-11-18 13:10:03.009778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.009809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.009984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.010015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.010132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.010164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.010288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.010319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 00:27:05.636 [2024-11-18 13:10:03.010447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.636 [2024-11-18 13:10:03.010483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.636 qpair failed and we were unable to recover it. 
00:27:05.638 [2024-11-18 13:10:03.024957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-11-18 13:10:03.025027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-11-18 13:10:03.025225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-11-18 13:10:03.025293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-11-18 13:10:03.027174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-11-18 13:10:03.027235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-11-18 13:10:03.027425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-11-18 13:10:03.027462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 00:27:05.638 [2024-11-18 13:10:03.027652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.638 [2024-11-18 13:10:03.027684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.638 qpair failed and we were unable to recover it. 
00:27:05.639 [2024-11-18 13:10:03.037596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-11-18 13:10:03.037627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-11-18 13:10:03.037812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-11-18 13:10:03.037844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-11-18 13:10:03.037956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-11-18 13:10:03.037982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-11-18 13:10:03.038145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-11-18 13:10:03.038190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.639 [2024-11-18 13:10:03.038370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-11-18 13:10:03.038404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 
00:27:05.639 [2024-11-18 13:10:03.038533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.639 [2024-11-18 13:10:03.038564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.639 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.038684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.038716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.038960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.038995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.039254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.039294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.039489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.039523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 
00:27:05.640 [2024-11-18 13:10:03.039699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.039730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.039851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.039878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.039984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.040010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.040181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.040208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.040399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.040428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 
00:27:05.640 [2024-11-18 13:10:03.040609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.040641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.040752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.040783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.040886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.040918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.041031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.041062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.041241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.041273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 
00:27:05.640 [2024-11-18 13:10:03.041456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.041489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.041624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.041654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.041926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.041958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.042083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.042115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.042222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.042252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 
00:27:05.640 [2024-11-18 13:10:03.042373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.042406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.042581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.042611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.042721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.042753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.042936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.042962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.043066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.043092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 
00:27:05.640 [2024-11-18 13:10:03.043325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.043361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.043458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.043483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.043643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.043669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.043850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.043883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.044066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.044097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 
00:27:05.640 [2024-11-18 13:10:03.044412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.044486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.044723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.044789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.045004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.045041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.045162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.640 [2024-11-18 13:10:03.045194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.640 qpair failed and we were unable to recover it. 00:27:05.640 [2024-11-18 13:10:03.045397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.045434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 
00:27:05.641 [2024-11-18 13:10:03.045686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.045719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.045985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.046017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.046140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.046172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.046430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.046466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.046601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.046633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 
00:27:05.641 [2024-11-18 13:10:03.046751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.046783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.046896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.046929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.047037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.047068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.047244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.047277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.047492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.047525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 
00:27:05.641 [2024-11-18 13:10:03.047640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.047672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.047788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.047820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.048005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.048037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.048142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.048173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.048293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.048324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 
00:27:05.641 [2024-11-18 13:10:03.048527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.048560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.048745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.048776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.048883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.048915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.049106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.049137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.049375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.049419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 
00:27:05.641 [2024-11-18 13:10:03.049558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.049589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.049783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.049815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.049948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.049979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.050093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.050124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.050461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.050493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 
00:27:05.641 [2024-11-18 13:10:03.050634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.050666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.050856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.050888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.051066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.051097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.051214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.051246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.051377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.051411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 
00:27:05.641 [2024-11-18 13:10:03.051586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.051618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.051752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.051783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.051962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.051994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.052120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.052151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 00:27:05.641 [2024-11-18 13:10:03.052262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.052294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it. 
00:27:05.641 [2024-11-18 13:10:03.052413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.641 [2024-11-18 13:10:03.052452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.641 qpair failed and we were unable to recover it.
00:27:05.644 [... identical connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0x7fad18000b90, addr=10.0.0.2, port=4420 through 13:10:03.076091 ...]
00:27:05.644 [2024-11-18 13:10:03.076307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.644 [2024-11-18 13:10:03.076338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.076601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.076632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.076752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.076784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.076963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.076994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.077180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.077211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 
00:27:05.645 [2024-11-18 13:10:03.077426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.077469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.077644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.077677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.077845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.077876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.078085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.078116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.078400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.078433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 
00:27:05.645 [2024-11-18 13:10:03.078567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.078598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.078774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.078806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.078988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.079019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.079202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.079233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.079367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.079400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 
00:27:05.645 [2024-11-18 13:10:03.079593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.079624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.079770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.079802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.079924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.079955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.080167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.080198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.080380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.080414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 
00:27:05.645 [2024-11-18 13:10:03.080584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.080616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.080879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.080910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.081049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.081081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.081254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.081286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.081475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.081507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 
00:27:05.645 [2024-11-18 13:10:03.081759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.081791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.081990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.082022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.082144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.082176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.082372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.082404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.082625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.082657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 
00:27:05.645 [2024-11-18 13:10:03.082918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.082950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.083145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.083175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.083381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.083414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.083554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.645 [2024-11-18 13:10:03.083585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.645 qpair failed and we were unable to recover it. 00:27:05.645 [2024-11-18 13:10:03.083722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.083753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-11-18 13:10:03.083870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.083900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.084019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.084050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.084220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.084251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.084490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.084522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.084648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.084680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-11-18 13:10:03.084879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.084911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.085156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.085186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.085370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.085403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.085536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.085568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.085741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.085772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-11-18 13:10:03.085963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.086001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.086108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.086139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.086337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.086390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.086571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.086602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.086788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.086820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-11-18 13:10:03.086939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.086970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.087078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.087109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.087234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.087265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.087390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.087423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.087544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.087575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-11-18 13:10:03.087766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.087797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.087965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.087996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.088205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.088236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.088418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.088451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.088639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.088671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-11-18 13:10:03.088796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.088827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.089019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.089051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.089168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.089200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.089338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.089378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.089505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.089537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-11-18 13:10:03.089733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.089765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.090030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.090062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.090321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.090371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.090597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.090629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.090737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.090768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 
00:27:05.646 [2024-11-18 13:10:03.090892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.090923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.091095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.646 [2024-11-18 13:10:03.091126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.646 qpair failed and we were unable to recover it. 00:27:05.646 [2024-11-18 13:10:03.091312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-11-18 13:10:03.091343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-11-18 13:10:03.091597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-11-18 13:10:03.091627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 00:27:05.647 [2024-11-18 13:10:03.091838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-11-18 13:10:03.091870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it. 
00:27:05.647 [2024-11-18 13:10:03.092048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.647 [2024-11-18 13:10:03.092078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.647 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair recovery errors for tqpair=0x7fad18000b90, addr=10.0.0.2, port=4420 repeat from 13:10:03.092338 through 13:10:03.116614 ...]
00:27:05.650 [2024-11-18 13:10:03.116747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.116777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.117063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.117094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.117345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.117388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.117579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.117610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.117791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.117821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 
00:27:05.650 [2024-11-18 13:10:03.118037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.118067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.118177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.118208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.118447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.118479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.118662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.118692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.118819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.118851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 
00:27:05.650 [2024-11-18 13:10:03.118979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.119010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.119212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.119243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.119523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.119555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.119677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.119708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.119836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.119867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 
00:27:05.650 [2024-11-18 13:10:03.119986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.120027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.120214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.120245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.120419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.120450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.120647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.120678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.120919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.120950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 
00:27:05.650 [2024-11-18 13:10:03.121137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.121167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.121287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.121317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.121557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.121589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.121854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.121885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.122081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.122112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 
00:27:05.650 [2024-11-18 13:10:03.122296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.122326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.122550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.122582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.122687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.122717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.650 qpair failed and we were unable to recover it. 00:27:05.650 [2024-11-18 13:10:03.122889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.650 [2024-11-18 13:10:03.122919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.123112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.123143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 
00:27:05.651 [2024-11-18 13:10:03.123281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.123311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.123539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.123571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.123809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.123840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.124011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.124041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.124220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.124251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 
00:27:05.651 [2024-11-18 13:10:03.124376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.124408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.124600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.124631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.124844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.124873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.125142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.125173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.125300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.125331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 
00:27:05.651 [2024-11-18 13:10:03.125609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.125640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.125822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.125852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.126050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.126082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.126323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.126378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.126563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.126595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 
00:27:05.651 [2024-11-18 13:10:03.126724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.126755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.126966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.126997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.127186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.127217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.127484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.127516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.127717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.127747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 
00:27:05.651 [2024-11-18 13:10:03.127888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.127919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.128116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.128146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.128415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.128447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.128701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.128732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.128997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.129029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 
00:27:05.651 [2024-11-18 13:10:03.129213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.129249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.129433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.129464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.129652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.129683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.129862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.129893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.130103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.130133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 
00:27:05.651 [2024-11-18 13:10:03.130305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.130336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.130520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.130552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.130794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.130825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.130997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.131028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.131225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.131256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 
00:27:05.651 [2024-11-18 13:10:03.131429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.131460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.651 qpair failed and we were unable to recover it. 00:27:05.651 [2024-11-18 13:10:03.131634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.651 [2024-11-18 13:10:03.131666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.652 qpair failed and we were unable to recover it. 00:27:05.652 [2024-11-18 13:10:03.131865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.652 [2024-11-18 13:10:03.131896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.652 qpair failed and we were unable to recover it. 00:27:05.652 [2024-11-18 13:10:03.132148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.652 [2024-11-18 13:10:03.132178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.652 qpair failed and we were unable to recover it. 00:27:05.652 [2024-11-18 13:10:03.132364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.652 [2024-11-18 13:10:03.132398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.652 qpair failed and we were unable to recover it. 
00:27:05.652 [2024-11-18 13:10:03.132611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.652 [2024-11-18 13:10:03.132643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.652 qpair failed and we were unable to recover it. 00:27:05.652 [2024-11-18 13:10:03.132813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.652 [2024-11-18 13:10:03.132844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.652 qpair failed and we were unable to recover it. 00:27:05.652 [2024-11-18 13:10:03.133035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.652 [2024-11-18 13:10:03.133066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.652 qpair failed and we were unable to recover it. 00:27:05.652 [2024-11-18 13:10:03.133234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.652 [2024-11-18 13:10:03.133266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.652 qpair failed and we were unable to recover it. 00:27:05.652 [2024-11-18 13:10:03.133439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.652 [2024-11-18 13:10:03.133472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.652 qpair failed and we were unable to recover it. 
00:27:05.652 [2024-11-18 13:10:03.133622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.652 [2024-11-18 13:10:03.133653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.652 qpair failed and we were unable to recover it.
00:27:05.655 [2024-11-18 13:10:03.158607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.158638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.158826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.158857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.159027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.159058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.159231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.159261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.159451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.159483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 
00:27:05.655 [2024-11-18 13:10:03.159749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.159781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.160034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.160065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.160167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.160198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.160335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.160375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.160616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.160647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 
00:27:05.655 [2024-11-18 13:10:03.160816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.160847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.161087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.161118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.161293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.161330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.161466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.161497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.161679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.161710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 
00:27:05.655 [2024-11-18 13:10:03.161879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.161910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.162164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.162195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.162329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.162366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.162549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.162580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.162754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.162785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 
00:27:05.655 [2024-11-18 13:10:03.163035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.163066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.163326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.655 [2024-11-18 13:10:03.163365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.655 qpair failed and we were unable to recover it. 00:27:05.655 [2024-11-18 13:10:03.163494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.163525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.163805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.163835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.164077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.164108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-11-18 13:10:03.164242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.164272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.164481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.164514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.164723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.164755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.165001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.165033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.165152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.165183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-11-18 13:10:03.165370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.165402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.165522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.165554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.165750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.165780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.165973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.166004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.166207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.166238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-11-18 13:10:03.166429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.166462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.166707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.166738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.166976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.167007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.167118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.167149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.167348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.167390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-11-18 13:10:03.167635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.167666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.167928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.167959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.168158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.168189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.168377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.168411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.168545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.168578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-11-18 13:10:03.168822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.168853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.169020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.169051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.169233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.169264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.169398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.169429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.169631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.169661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-11-18 13:10:03.169789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.169819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.170002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.170033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.170163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.170199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.170390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.170422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.170613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.170644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-11-18 13:10:03.170819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.170849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.170970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.171000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.171176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.171206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.171395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.171426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 00:27:05.656 [2024-11-18 13:10:03.171597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.656 [2024-11-18 13:10:03.171628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.656 qpair failed and we were unable to recover it. 
00:27:05.656 [2024-11-18 13:10:03.171838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.171869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.171978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.172009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.172197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.172228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.172410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.172442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.172708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.172739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 
00:27:05.657 [2024-11-18 13:10:03.172951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.172982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.173107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.173138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.173267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.173299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.173449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.173481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.173657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.173688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 
00:27:05.657 [2024-11-18 13:10:03.173926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.173958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.174145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.174175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.174397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.174431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.174671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.174702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 00:27:05.657 [2024-11-18 13:10:03.174835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.657 [2024-11-18 13:10:03.174865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.657 qpair failed and we were unable to recover it. 
00:27:05.657 [2024-11-18 13:10:03.174992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.657 [2024-11-18 13:10:03.175022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.657 qpair failed and we were unable to recover it.
00:27:05.657 [... the three-line error above repeats 113 more times between 13:10:03.175146 and 13:10:03.199322, always for the same tqpair=0x7fad18000b90, addr=10.0.0.2, port=4420 ...]
00:27:05.660 [2024-11-18 13:10:03.199571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.660 [2024-11-18 13:10:03.199602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.660 qpair failed and we were unable to recover it.
00:27:05.660 [2024-11-18 13:10:03.199723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.199754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.199947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.199978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.200115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.200145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.200262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.200292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.200418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.200450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 
00:27:05.660 [2024-11-18 13:10:03.200626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.200658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.200865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.200895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.201083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.201114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.201380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.201412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.201533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.201564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 
00:27:05.660 [2024-11-18 13:10:03.201759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.201790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.202052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.202082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.202260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.202291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.202483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.202515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.202654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.202686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 
00:27:05.660 [2024-11-18 13:10:03.202824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.202854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.660 [2024-11-18 13:10:03.202960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.660 [2024-11-18 13:10:03.202992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.660 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.203112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.203142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.203268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.203299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.203497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.203528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 
00:27:05.661 [2024-11-18 13:10:03.203630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.203661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.203771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.203802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.203923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.203960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.204153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.204184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.204388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.204420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 
00:27:05.661 [2024-11-18 13:10:03.204606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.204636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.204878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.204909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.205084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.205114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.205253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.205283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.205466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.205498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 
00:27:05.661 [2024-11-18 13:10:03.205637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.205668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.205787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.205819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.206058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.206089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.206204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.206235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.206371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.206404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 
00:27:05.661 [2024-11-18 13:10:03.206515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.206545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.206737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.206769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.206973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.207004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.207219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.207251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.207425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.207457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 
00:27:05.661 [2024-11-18 13:10:03.207776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.207808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.208006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.208037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.208165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.208196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.208407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.208438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.208700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.208730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 
00:27:05.661 [2024-11-18 13:10:03.208912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.208943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.209066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.209097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.209394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.209426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.209675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.209706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.209886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.209918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 
00:27:05.661 [2024-11-18 13:10:03.210095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.210126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.210243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.210274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.210409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.210440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.210612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.210642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 00:27:05.661 [2024-11-18 13:10:03.210767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.661 [2024-11-18 13:10:03.210797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.661 qpair failed and we were unable to recover it. 
00:27:05.662 [2024-11-18 13:10:03.210982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.211012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.211186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.211218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.211344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.211384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.212243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.212287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.212582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.212616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 
00:27:05.662 [2024-11-18 13:10:03.212829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.212861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.213048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.213079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.213273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.213312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.213522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.213555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.213695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.213729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 
00:27:05.662 [2024-11-18 13:10:03.213995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.214026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.214150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.214182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.214379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.214411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.214669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.214700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.214837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.214868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 
00:27:05.662 [2024-11-18 13:10:03.215042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.215073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.215250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.215281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.215482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.215515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.215773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.215804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.215922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.215953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 
00:27:05.662 [2024-11-18 13:10:03.216138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.216170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.216306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.216337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.216540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.216572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.216773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.216805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 00:27:05.662 [2024-11-18 13:10:03.216941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.662 [2024-11-18 13:10:03.216971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.662 qpair failed and we were unable to recover it. 
00:27:05.665 [the same connect() failed (errno = 111) / sock connection error / qpair failed record group repeats for tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 through 2024-11-18 13:10:03.242601]
00:27:05.665 [2024-11-18 13:10:03.242732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.242763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.243053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.243084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.243297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.243328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.243596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.243628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.243874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.243904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 
00:27:05.665 [2024-11-18 13:10:03.244095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.244126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.244301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.244332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.244602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.244634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.244808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.244839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.245026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.245056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 
00:27:05.665 [2024-11-18 13:10:03.245336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.245377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.245617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.665 [2024-11-18 13:10:03.245648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.665 qpair failed and we were unable to recover it. 00:27:05.665 [2024-11-18 13:10:03.245765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.245795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.245965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.245996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.246282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.246312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 
00:27:05.666 [2024-11-18 13:10:03.246506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.246538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.246784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.246816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.247076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.247107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.247398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.247431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.247649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.247680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 
00:27:05.666 [2024-11-18 13:10:03.247875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.247905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.248105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.248137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.248321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.248361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.248623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.248653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.248946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.248977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 
00:27:05.666 [2024-11-18 13:10:03.249209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.249239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.249493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.249525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.249785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.249816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.250109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.250140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.250409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.250441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 
00:27:05.666 [2024-11-18 13:10:03.250635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.250666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.250924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.250955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.251205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.251242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.251449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.251482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.251666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.251697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 
00:27:05.666 [2024-11-18 13:10:03.251961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.251992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.252192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.252222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.252502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.252534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.252773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.252804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.253090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.253120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 
00:27:05.666 [2024-11-18 13:10:03.253399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.253431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.253620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.253651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.253905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.253935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.254223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.254254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.254520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.254552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 
00:27:05.666 [2024-11-18 13:10:03.254723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.254754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.255046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.255078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.255391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.255425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.255562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.255594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 00:27:05.666 [2024-11-18 13:10:03.255883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.666 [2024-11-18 13:10:03.255913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.666 qpair failed and we were unable to recover it. 
00:27:05.666 [2024-11-18 13:10:03.256175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.256206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.256488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.256521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.256703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.256733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.257001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.257032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.257224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.257256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 
00:27:05.667 [2024-11-18 13:10:03.257515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.257547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.257749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.257780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.257972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.258003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.258184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.258214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.258459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.258492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 
00:27:05.667 [2024-11-18 13:10:03.258782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.258813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.259075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.259106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.259305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.259335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.259548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.259579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.259754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.259785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 
00:27:05.667 [2024-11-18 13:10:03.259994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.260024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.260293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.260323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.260615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.260647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.260865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.260894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.261027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.261058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 
00:27:05.667 [2024-11-18 13:10:03.261293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.261323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.261475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.261507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.261708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.261744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.262021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.262053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.262323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.262365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 
00:27:05.667 [2024-11-18 13:10:03.262500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.262531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.262779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.262810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.263047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.263079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.263197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.263227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2483174 Killed "${NVMF_APP[@]}" "$@" 00:27:05.667 [2024-11-18 13:10:03.263433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.263467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 
00:27:05.667 [2024-11-18 13:10:03.263732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.263764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.263954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.263985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:05.667 [2024-11-18 13:10:03.264164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.264197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 qpair failed and we were unable to recover it. 00:27:05.667 [2024-11-18 13:10:03.264382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.667 [2024-11-18 13:10:03.264414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.667 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:05.667 qpair failed and we were unable to recover it. 
00:27:05.667 [2024-11-18 13:10:03.264636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.667 [2024-11-18 13:10:03.264674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.667 qpair failed and we were unable to recover it.
00:27:05.667 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:05.667 [2024-11-18 13:10:03.264893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.667 [2024-11-18 13:10:03.264925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.667 qpair failed and we were unable to recover it.
00:27:05.667 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:05.667 [2024-11-18 13:10:03.265190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.668 [2024-11-18 13:10:03.265221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.668 qpair failed and we were unable to recover it.
00:27:05.668 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:05.668 [2024-11-18 13:10:03.265518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.668 [2024-11-18 13:10:03.265551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.668 qpair failed and we were unable to recover it.
00:27:05.668 [2024-11-18 13:10:03.265743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.265774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.266035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.266067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.266347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.266389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.266598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.266629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.266884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.266915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 
00:27:05.668 [2024-11-18 13:10:03.267210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.267241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.267508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.267541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.267825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.267856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.268001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.268033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.268285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.268319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 
00:27:05.668 [2024-11-18 13:10:03.268525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.268556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.268731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.268761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.268949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.268980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.269178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.269209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.269473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.269506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 
00:27:05.668 [2024-11-18 13:10:03.269786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.269818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.270083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.270113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.270243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.270274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.270508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.270540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.270805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.270836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 
00:27:05.668 [2024-11-18 13:10:03.271030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.271061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.271320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.271362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.271499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.271531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.271794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.271826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 00:27:05.668 [2024-11-18 13:10:03.272030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.668 [2024-11-18 13:10:03.272061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.668 qpair failed and we were unable to recover it. 
00:27:05.668 [2024-11-18 13:10:03.272250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.668 [2024-11-18 13:10:03.272281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.668 qpair failed and we were unable to recover it.
00:27:05.668 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2483894
00:27:05.668 [2024-11-18 13:10:03.272448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.668 [2024-11-18 13:10:03.272483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.668 qpair failed and we were unable to recover it.
00:27:05.669 [2024-11-18 13:10:03.272749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.272781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2483894
00:27:05.669 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:05.669 [2024-11-18 13:10:03.272984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.273016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2483894 ']'
00:27:05.669 [2024-11-18 13:10:03.273217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.273250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 [2024-11-18 13:10:03.273519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.273551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:05.669 [2024-11-18 13:10:03.273786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.273818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:05.669 [2024-11-18 13:10:03.273925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.273968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:05.669 [2024-11-18 13:10:03.274146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.274180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:05.669 [2024-11-18 13:10:03.274444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.274479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 [2024-11-18 13:10:03.274670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.274703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:05.669 [2024-11-18 13:10:03.274893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.669 [2024-11-18 13:10:03.274925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.669 qpair failed and we were unable to recover it.
00:27:05.669 [2024-11-18 13:10:03.275206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.275237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.275381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.275414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.275542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.275573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.275774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.275806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.276018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.276052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 
00:27:05.669 [2024-11-18 13:10:03.276237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.276268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.276489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.276523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.276768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.276800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.276982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.277013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.277187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.277219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 
00:27:05.669 [2024-11-18 13:10:03.277360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.277393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.277660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.277693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.277821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.277853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.278120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.278151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.278268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.278299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 
00:27:05.669 [2024-11-18 13:10:03.278583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.278617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.278798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.278830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.278934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.278965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.279237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.279268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.279471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.279503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 
00:27:05.669 [2024-11-18 13:10:03.279827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.279860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.280127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.280157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.280348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.669 [2024-11-18 13:10:03.280392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.669 qpair failed and we were unable to recover it. 00:27:05.669 [2024-11-18 13:10:03.280580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.280611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.280881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.280914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 
00:27:05.670 [2024-11-18 13:10:03.281192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.281224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.281424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.281457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.281748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.281779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.282075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.282107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.282297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.282328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 
00:27:05.670 [2024-11-18 13:10:03.282579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.282610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.282814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.282846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.283024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.283056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.283196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.283233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.283450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.283481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 
00:27:05.670 [2024-11-18 13:10:03.283726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.283758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.284045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.284077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.284378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.284411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.284688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.284720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.284928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.284959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 
00:27:05.670 [2024-11-18 13:10:03.285159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.285190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.285389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.285422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.285560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.285592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.285812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.285843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 00:27:05.670 [2024-11-18 13:10:03.286054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.670 [2024-11-18 13:10:03.286085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.670 qpair failed and we were unable to recover it. 
00:27:05.953 [2024-11-18 13:10:03.312057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.312088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.312297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.312328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.312544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.312576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.312762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.312793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.313059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.313090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 
00:27:05.953 [2024-11-18 13:10:03.313288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.313319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.313599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.313632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.313809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.313839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.313960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.313992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.314118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.314149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 
00:27:05.953 [2024-11-18 13:10:03.314393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.314425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.314618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.314649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.314920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.314952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.315216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.315246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 00:27:05.953 [2024-11-18 13:10:03.315487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.953 [2024-11-18 13:10:03.315519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.953 qpair failed and we were unable to recover it. 
00:27:05.954 [2024-11-18 13:10:03.315695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.315725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.315915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.315946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.316153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.316183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.316300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.316331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.316588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.316620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 
00:27:05.954 [2024-11-18 13:10:03.316743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.316774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.316974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.317004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.317126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.317158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.317288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.317319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.317434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.317466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 
00:27:05.954 [2024-11-18 13:10:03.317742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.317778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.317951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.317982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.318182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.318212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.318456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.318509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.318712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.318743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 
00:27:05.954 [2024-11-18 13:10:03.318932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.318962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.319210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.319241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.319429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.319461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.319590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.319620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.319884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.319915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 
00:27:05.954 [2024-11-18 13:10:03.320116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.320149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.320334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.320383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.320576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.320608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.320848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.320879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.320996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.321027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 
00:27:05.954 [2024-11-18 13:10:03.321212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.321243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.321511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.321544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.321673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.321703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.321894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.321925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.322052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.322083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 
00:27:05.954 [2024-11-18 13:10:03.322281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.322313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.322527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.322560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.322569] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:27:05.954 [2024-11-18 13:10:03.322627] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.954 [2024-11-18 13:10:03.322674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.322705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.322912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.322941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 
00:27:05.954 [2024-11-18 13:10:03.323146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.323176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.323424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.323455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.954 qpair failed and we were unable to recover it. 00:27:05.954 [2024-11-18 13:10:03.323753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.954 [2024-11-18 13:10:03.323786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.323898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.323931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.324065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.324096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 
00:27:05.955 [2024-11-18 13:10:03.324230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.324262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.324457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.324490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.324679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.324711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.324957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.324990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.325177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.325209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 
00:27:05.955 [2024-11-18 13:10:03.325470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.325503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.325708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.325740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.326003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.326037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.326310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.326342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.326567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.326601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 
00:27:05.955 [2024-11-18 13:10:03.326804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.326838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.326952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.326985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.327188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.327220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.327414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.327447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.327646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.327678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 
00:27:05.955 [2024-11-18 13:10:03.327857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.327891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.328161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.328194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.328392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.328427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.328683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.328716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.328991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.329024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 
00:27:05.955 [2024-11-18 13:10:03.329145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.329179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.329304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.329335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.329451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.329483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.329626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.329665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.329949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.329981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 
00:27:05.955 [2024-11-18 13:10:03.330112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.330144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.330401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.330435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.330628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.330660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.330868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.330901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.331025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.331057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 
00:27:05.955 [2024-11-18 13:10:03.331244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.331277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.331471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.331505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.331753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.331784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.332030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.332062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.955 qpair failed and we were unable to recover it. 00:27:05.955 [2024-11-18 13:10:03.332331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.955 [2024-11-18 13:10:03.332372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 
00:27:05.956 [2024-11-18 13:10:03.332591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.332623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.332836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.332869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.332993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.333026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.333200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.333232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.333425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.333459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 
00:27:05.956 [2024-11-18 13:10:03.333729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.333761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.333976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.334007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.334119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.334152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.334264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.334298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.334515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.334547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 
00:27:05.956 [2024-11-18 13:10:03.334675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.334707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.334923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.334955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.335138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.335171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.335275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.335306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.335448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.335483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 
00:27:05.956 [2024-11-18 13:10:03.335753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.335786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.335916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.335949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.336147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.336181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.336395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.336430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.336543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.336576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 
00:27:05.956 [2024-11-18 13:10:03.336696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.336727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.336839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.336870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.337048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.337082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.337348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.337389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.337633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.337666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 
00:27:05.956 [2024-11-18 13:10:03.337779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.337811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.338046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.338079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.338216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.338249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.338445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.338487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.338681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.338714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 
00:27:05.956 [2024-11-18 13:10:03.338923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.338955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.339066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.339098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.339201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.339233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.339369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.339404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.339512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.339545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 
00:27:05.956 [2024-11-18 13:10:03.339728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.339760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.339942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.339975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.956 qpair failed and we were unable to recover it. 00:27:05.956 [2024-11-18 13:10:03.340107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.956 [2024-11-18 13:10:03.340139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.340338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.340405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.340650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.340682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 
00:27:05.957 [2024-11-18 13:10:03.340873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.340904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.341030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.341062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.341367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.341402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.341665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.341696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.341820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.341853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 
00:27:05.957 [2024-11-18 13:10:03.342038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.342071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.342259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.342291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.342489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.342523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.342709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.342741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.342931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.342963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 
00:27:05.957 [2024-11-18 13:10:03.343177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.343209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.343314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.343347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.343606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.343638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.343848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.343880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.344056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.344088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 
00:27:05.957 [2024-11-18 13:10:03.344218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.344251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.344384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.344417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.344533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.344565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.344788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.344821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.345086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.345118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 
00:27:05.957 [2024-11-18 13:10:03.345325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.345367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.345576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.345609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.345797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.345829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.346109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.346142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.346263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.346294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 
00:27:05.957 [2024-11-18 13:10:03.346428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.346462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.346597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.346629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.346821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.346854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.346985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.347024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 00:27:05.957 [2024-11-18 13:10:03.347214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.957 [2024-11-18 13:10:03.347246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.957 qpair failed and we were unable to recover it. 
00:27:05.958 [2024-11-18 13:10:03.347523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.347558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.347829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.347861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.347978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.348010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.348224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.348257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.348475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.348509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 
00:27:05.958 [2024-11-18 13:10:03.348616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.348649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.348868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.348900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.349094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.349126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.349258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.349290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.349490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.349523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 
00:27:05.958 [2024-11-18 13:10:03.349695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.349728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.349919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.349952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.350174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.350206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.350447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.350481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.350779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.350827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 
00:27:05.958 [2024-11-18 13:10:03.350972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.351003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.351213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.351246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.351432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.351466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.351609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.351641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 00:27:05.958 [2024-11-18 13:10:03.351867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.958 [2024-11-18 13:10:03.351899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.958 qpair failed and we were unable to recover it. 
00:27:05.958 [2024-11-18 13:10:03.352016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.352050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.352291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.352324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.352461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.352502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.352689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.352722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.352988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.353020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.353200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.353233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.353436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.353470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.353584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.353615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.353795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.353827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.354106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.354138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.354253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.354285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.354453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.354486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.354729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.354762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.354881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.354912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.355176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.355209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.355394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.355428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.355672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.355704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.958 qpair failed and we were unable to recover it.
00:27:05.958 [2024-11-18 13:10:03.355972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.958 [2024-11-18 13:10:03.356005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.356206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.356244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.356516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.356549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.356691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.356723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.356931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.356962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.357164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.357196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.357394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.357428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.357631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.357662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.357790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.357823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.358008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.358039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.358234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.358267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.358450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.358484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.358593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.358626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.358753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.358787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.358994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.359025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.359242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.359274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.359494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.359529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.359654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.359685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.359877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.359909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.360013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.360045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.360284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.360316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.360600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.360639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.360776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.360808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.361023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.361055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.361333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.361373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.361558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.361590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.361858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.361890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.362004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.362036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.362155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.362188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.362379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.362413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.362600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.362631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.362817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.362849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.363024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.363055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.363233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.363265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.363508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.363541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.363755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.363787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.363995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.364027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.364290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.959 [2024-11-18 13:10:03.364320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.959 qpair failed and we were unable to recover it.
00:27:05.959 [2024-11-18 13:10:03.364589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.364623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.364884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.364915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.365029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.365062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.365258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.365292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.365436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.365470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.365653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.365684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.365800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.365832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.366017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.366048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.366264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.366297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.366477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.366510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.366614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.366647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.366821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.366853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.366991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.367024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.367142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.367173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.367372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.367407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.367603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.367635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.367825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.367857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.368058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.368091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.368198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.368230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.368424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.368458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.368646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.368677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.368803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.368834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.369097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.369129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.369328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.369370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.369491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.369523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.369694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.369726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.369923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.369955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.370225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.370256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.370439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.370471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.370753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.370785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.370972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.371011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.371216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.371248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.371418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.371452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.371563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.371595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.371836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.371867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.371985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.372017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.372208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.372240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.372403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.960 [2024-11-18 13:10:03.372436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.960 qpair failed and we were unable to recover it.
00:27:05.960 [2024-11-18 13:10:03.372542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.961 [2024-11-18 13:10:03.372574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.961 qpair failed and we were unable to recover it.
00:27:05.961 [2024-11-18 13:10:03.372691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.961 [2024-11-18 13:10:03.372723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.961 qpair failed and we were unable to recover it.
00:27:05.961 [2024-11-18 13:10:03.372853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.372886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.373065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.373097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.373274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.373306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.373556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.373590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.373795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.373828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-11-18 13:10:03.374080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.374112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.374239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.374269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.374471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.374505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.374627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.374659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.374830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.374862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-11-18 13:10:03.375045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.375076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.375318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.375361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.375485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.375517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.375685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.375717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.375827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.375860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-11-18 13:10:03.376031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.376064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.376265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.376296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.376435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.376470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.376594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.376626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.376764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.376796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-11-18 13:10:03.376978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.377009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.377121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.377153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.377402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.377435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.377555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.377588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.377806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.377839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-11-18 13:10:03.377961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.377992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.378112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.378144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.378316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.378348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.378493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.378525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.378643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.378675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-11-18 13:10:03.378782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.378820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.378964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.378996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.379114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.379147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.379388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.379421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.379634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.379666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 
00:27:05.961 [2024-11-18 13:10:03.379910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.961 [2024-11-18 13:10:03.379942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.961 qpair failed and we were unable to recover it. 00:27:05.961 [2024-11-18 13:10:03.380186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.380217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.380345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.380387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.380517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.380548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.380800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.380831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 
00:27:05.962 [2024-11-18 13:10:03.381085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.381117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.381237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.381269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.381440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.381473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.381662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.381694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.381872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.381905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 
00:27:05.962 [2024-11-18 13:10:03.382035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.382068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.382239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.382272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.382469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.382504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.382784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.382816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.382944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.382976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 
00:27:05.962 [2024-11-18 13:10:03.383102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.383134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.383261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.383293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.383485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.383518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.383641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.383673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 00:27:05.962 [2024-11-18 13:10:03.383846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.962 [2024-11-18 13:10:03.383878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.962 qpair failed and we were unable to recover it. 
00:27:05.962 [2024-11-18 13:10:03.384577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.962 [2024-11-18 13:10:03.384651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.962 qpair failed and we were unable to recover it.
00:27:05.963 [2024-11-18 13:10:03.395433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.963 [2024-11-18 13:10:03.395467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.963 qpair failed and we were unable to recover it. 00:27:05.963 [2024-11-18 13:10:03.395706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.963 [2024-11-18 13:10:03.395739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.963 qpair failed and we were unable to recover it. 00:27:05.963 [2024-11-18 13:10:03.395863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.395895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.396103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.396134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.396392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.396425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 
00:27:05.964 [2024-11-18 13:10:03.396596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.396635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.396845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.396878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.397055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.397088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.397193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.397226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.397417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.397451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 
00:27:05.964 [2024-11-18 13:10:03.397635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.397669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.397787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.397820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.398011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.398043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.398223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.398256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.398441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.398475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 
00:27:05.964 [2024-11-18 13:10:03.398601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.398633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.398836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.398868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.398993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.399026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.399151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.399183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.399365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.399400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 
00:27:05.964 [2024-11-18 13:10:03.399579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.399611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.399783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.399816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.399928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.399960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.400216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.400248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.400418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.400451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 
00:27:05.964 [2024-11-18 13:10:03.400716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.400750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.400940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.400972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.401166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.401199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.401395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.401429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.401671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.401704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 
00:27:05.964 [2024-11-18 13:10:03.401893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.401926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.402206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.402239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.402448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.402482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.402653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.402686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.402864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.402898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 
00:27:05.964 [2024-11-18 13:10:03.403079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.403111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.403302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.403334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.403462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.403495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.403681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.403713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 00:27:05.964 [2024-11-18 13:10:03.403914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.964 [2024-11-18 13:10:03.403947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.964 qpair failed and we were unable to recover it. 
00:27:05.965 [2024-11-18 13:10:03.404134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.404167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.404288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.404321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.404555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.404627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.404694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74daf0 (9): Bad file descriptor 00:27:05.965 [2024-11-18 13:10:03.404927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.404966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.405165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.405198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 
00:27:05.965 [2024-11-18 13:10:03.405310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.405342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.405517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.965 [2024-11-18 13:10:03.405539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.405572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.405785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.405818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.405996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.406028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.406148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.406180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 
00:27:05.965 [2024-11-18 13:10:03.406365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.406400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.406527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.406559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.406743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.406775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.407038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.407071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.407184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.407216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 
00:27:05.965 [2024-11-18 13:10:03.407339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.407382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.407572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.407605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.407712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.407744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.407920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.407953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.408127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.408159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 
00:27:05.965 [2024-11-18 13:10:03.408421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.408454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.408606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.408639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.408942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.408975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.409241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.409274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.409489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.409522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 
00:27:05.965 [2024-11-18 13:10:03.409650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.409683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.409866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.409899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.410097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.410129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.410271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.410304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.410437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.410472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 
00:27:05.965 [2024-11-18 13:10:03.410735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.410768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.410961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.411000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.411125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.411157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.411371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.411408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 00:27:05.965 [2024-11-18 13:10:03.411588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.411620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it. 
00:27:05.965 [2024-11-18 13:10:03.411792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.965 [2024-11-18 13:10:03.411826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.965 qpair failed and we were unable to recover it.
[... the same message pair (posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error), each followed by "qpair failed and we were unable to recover it.", repeats continuously from 13:10:03.411792 through 13:10:03.437134 for tqpair values 0x7fad18000b90, 0x73fba0, and 0x7fad24000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:27:05.969 [2024-11-18 13:10:03.437303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.437336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.437608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.437641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.437857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.437897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.438067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.438099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.438339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.438379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 
00:27:05.969 [2024-11-18 13:10:03.438518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.438551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.438738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.438771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.438964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.438996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.439187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.439219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.439394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.439427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 
00:27:05.969 [2024-11-18 13:10:03.439543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.439576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.439845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.439877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.440069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.440102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.440285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.440317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.440543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.440576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 
00:27:05.969 [2024-11-18 13:10:03.440780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.440812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.441036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.441069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.441309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.441340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.441537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.441571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.441841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.441874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 
00:27:05.969 [2024-11-18 13:10:03.442050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.442082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.442189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.442221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.442415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.442449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.442632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.442664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.442878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.442911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 
00:27:05.969 [2024-11-18 13:10:03.443024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.443056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.443243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.443275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.443465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.443499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.443741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.443775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.443950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.443989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 
00:27:05.969 [2024-11-18 13:10:03.444174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.444207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.969 qpair failed and we were unable to recover it. 00:27:05.969 [2024-11-18 13:10:03.444405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.969 [2024-11-18 13:10:03.444442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.444710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.444742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.444914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.444946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.445208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.445242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-11-18 13:10:03.445460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.445497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.445749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.445784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.445975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.446010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.446251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.446284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.446421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.446454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-11-18 13:10:03.446577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.446611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.446746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.446750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.970 [2024-11-18 13:10:03.446777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.970 [2024-11-18 13:10:03.446779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 [2024-11-18 13:10:03.446785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.970 [2024-11-18 13:10:03.446806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.970 [2024-11-18 13:10:03.446813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.446964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.446996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-11-18 13:10:03.447246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.447278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.447541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.447575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.447750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.447783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.448003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.448036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.448163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.448197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-11-18 13:10:03.448406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.448443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.448476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:05.970 [2024-11-18 13:10:03.448571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.448605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.448584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:05.970 [2024-11-18 13:10:03.448690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:05.970 [2024-11-18 13:10:03.448691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:05.970 [2024-11-18 13:10:03.448782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.448813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.448938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.448971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-11-18 13:10:03.449215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.449247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.449498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.449533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.449718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.449752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.449876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.449908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.450115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.450148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-11-18 13:10:03.450324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.450366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.450475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.450508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.450625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.450657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.450923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.450955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.451148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.451181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 
00:27:05.970 [2024-11-18 13:10:03.451302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.451335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.451586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.451620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.451762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.451794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.970 [2024-11-18 13:10:03.452004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.970 [2024-11-18 13:10:03.452037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.970 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.452221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.452255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 
00:27:05.971 [2024-11-18 13:10:03.452536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.452571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.452770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.452803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.452996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.453029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.453217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.453249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.453422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.453455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 
00:27:05.971 [2024-11-18 13:10:03.453644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.453676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.453795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.453828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.454019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.454051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.454238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.454271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.454443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.454476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 
00:27:05.971 [2024-11-18 13:10:03.454652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.454684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.454925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.454958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.455080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.455119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.455369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.455404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 00:27:05.971 [2024-11-18 13:10:03.455531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.971 [2024-11-18 13:10:03.455564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.971 qpair failed and we were unable to recover it. 
00:27:05.971 [2024-11-18 13:10:03.455736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.455768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.455899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.455932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.456052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.456085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.456271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.456303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.456429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.456461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.456651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.456684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.456808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.456841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.457017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.457049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.457289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.457321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.457551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.457594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.457780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.457814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.458024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.458059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.458301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.458334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.458522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.458556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.458728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.458761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.971 qpair failed and we were unable to recover it.
00:27:05.971 [2024-11-18 13:10:03.459024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.971 [2024-11-18 13:10:03.459057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.459276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.459309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.459523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.459558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.459697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.459729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.459909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.459942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.460135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.460170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.460412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.460449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.460650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.460686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.460886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.460920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.461054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.461096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.461290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.461326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.461523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.461558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.461731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.461765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.462036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.462072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.462245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.462277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.462530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.462565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.462772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.462807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.462987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.463021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.463213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.463247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.463418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.463454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.463698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.463732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.463903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.463937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.464160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.464193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.464443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.464478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.464749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.464781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.465063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.465098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.465420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.465456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.465629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.465662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.465845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.465879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.466121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.466154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.466329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.466370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.466492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.466525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.466712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.466746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.466867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.466901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.467143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.467176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.467362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.467398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.467649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.467689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.467878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.467911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.468123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.972 [2024-11-18 13:10:03.468157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.972 qpair failed and we were unable to recover it.
00:27:05.972 [2024-11-18 13:10:03.468300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.468333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.468479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.468513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.468630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.468663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.468845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.468878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.469071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.469104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.469319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.469361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.469517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.469550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.469722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.469756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.470027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.470061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.470238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.470271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.470418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.470452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.470687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.470751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.470961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.471006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.471308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.471360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.471560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.471592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.471832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.471864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.472102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.472135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.472359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.472393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.472646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.472678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.472869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.472902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.473066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.473098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.473339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.473382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.473570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.473601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.473840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.473872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.474044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.474091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.474269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.474302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.474508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.474543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.474649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.474681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.474787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.474819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.474947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.474980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.475160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.475191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.475455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.475489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.475702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.475736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.475906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.475938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.476218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.476250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.476425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.476459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.476749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.476781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.973 [2024-11-18 13:10:03.476974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.973 [2024-11-18 13:10:03.477006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.973 qpair failed and we were unable to recover it.
00:27:05.974 [2024-11-18 13:10:03.477137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.974 [2024-11-18 13:10:03.477169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.974 qpair failed and we were unable to recover it.
00:27:05.974 [2024-11-18 13:10:03.477290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.974 [2024-11-18 13:10:03.477322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:05.974 qpair failed and we were unable to recover it.
00:27:05.974 [2024-11-18 13:10:03.477605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.974 [2024-11-18 13:10:03.477646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:05.974 qpair failed and we were unable to recover it.
00:27:05.974 [2024-11-18 13:10:03.477805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.477837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.478103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.478135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.478305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.478338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.478526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.478558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.478844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.478877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 
00:27:05.974 [2024-11-18 13:10:03.479128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.479160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.479423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.479457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.479660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.479692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.479939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.479971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.480263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.480294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 
00:27:05.974 [2024-11-18 13:10:03.480559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.480602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.480906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.480942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.481065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.481096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.481364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.481398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.481668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.481700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 
00:27:05.974 [2024-11-18 13:10:03.481894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.481927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.482124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.482156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.482421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.482454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.482740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.482774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.483044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.483075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 
00:27:05.974 [2024-11-18 13:10:03.483288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.483321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.483634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.483671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.483946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.483980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.484198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.484231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.484425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.484461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 
00:27:05.974 [2024-11-18 13:10:03.484652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.484686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.484894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.484929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.485193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.485226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.485467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.485503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.485765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.485797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 
00:27:05.974 [2024-11-18 13:10:03.486062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.486095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.486338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.486383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.486574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.486607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.486775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.486808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 00:27:05.974 [2024-11-18 13:10:03.487069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.974 [2024-11-18 13:10:03.487101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.974 qpair failed and we were unable to recover it. 
00:27:05.975 [2024-11-18 13:10:03.487301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.487334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.487565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.487598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.487909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.487948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.488141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.488174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.488370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.488404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 
00:27:05.975 [2024-11-18 13:10:03.488578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.488611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.488899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.488933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.489117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.489150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.489332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.489373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.489640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.489675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 
00:27:05.975 [2024-11-18 13:10:03.489953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.489987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.490261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.490294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.490559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.490594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.490788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.490822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.491009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.491044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 
00:27:05.975 [2024-11-18 13:10:03.491324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.491375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.491661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.491695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.491967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.492001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.492274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.492308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.492510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.492547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 
00:27:05.975 [2024-11-18 13:10:03.492769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.492803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.492930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.492963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.493232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.493267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.493551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.493587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.493767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.493800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 
00:27:05.975 [2024-11-18 13:10:03.494082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.494117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.494412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.494448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.494705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.494738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.495020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.495054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.495320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.495363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 
00:27:05.975 [2024-11-18 13:10:03.495640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.495673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.495943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.495976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.496180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.496215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.496479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.496514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.496777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.496810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 
00:27:05.975 [2024-11-18 13:10:03.496932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.496965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.497152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.497186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.975 [2024-11-18 13:10:03.497468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.975 [2024-11-18 13:10:03.497504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.975 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.497804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.497839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.498043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.498077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 
00:27:05.976 [2024-11-18 13:10:03.498322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.498363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.498613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.498648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.498846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.498902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.499029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.499062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.499270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.499302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 
00:27:05.976 [2024-11-18 13:10:03.499575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.499609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.499748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.499782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.500043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.500077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.500296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.500329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.500596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.500630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 
00:27:05.976 [2024-11-18 13:10:03.500740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.500773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.500959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.500992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.501250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.501284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.501459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.501491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 00:27:05.976 [2024-11-18 13:10:03.501746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.976 [2024-11-18 13:10:03.501779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.976 qpair failed and we were unable to recover it. 
00:27:05.979 [2024-11-18 13:10:03.526525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.526558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.526770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.526802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.526997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.527029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.527287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.527319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.527547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.527600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 
00:27:05.979 [2024-11-18 13:10:03.527872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.527905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.528166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.528198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.528387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.528428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.528697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.528729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.528999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.529032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 
00:27:05.979 [2024-11-18 13:10:03.529242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.529274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.529450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.529484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.529750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.529783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.529958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.529990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.530230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.530262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 
00:27:05.979 [2024-11-18 13:10:03.530551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.530585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.530853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.530885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.531098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.531131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.531420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.531453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.531637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.531669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 
00:27:05.979 [2024-11-18 13:10:03.531795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.531827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.532038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.532071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.532335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.532378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.532567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.532600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.532839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.532872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 
00:27:05.979 [2024-11-18 13:10:03.533004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.533036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.533277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.533310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.533509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.533542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.533718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.533749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 00:27:05.979 [2024-11-18 13:10:03.533942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.979 [2024-11-18 13:10:03.533974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.979 qpair failed and we were unable to recover it. 
00:27:05.980 [2024-11-18 13:10:03.534147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.534180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.534384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.534417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.534587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.534618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.534860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.534892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.535126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.535158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 
00:27:05.980 [2024-11-18 13:10:03.535361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.535396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.535623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.535655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.535947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.535980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.536085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.536117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.536382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.536415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 
00:27:05.980 [2024-11-18 13:10:03.536678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.536710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.536925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.536957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.537080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.537112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.537288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.537320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.537454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.537488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 
00:27:05.980 [2024-11-18 13:10:03.537625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.537657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.537779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.537811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.538031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.538070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.538381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.538414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.538695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.538727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 
00:27:05.980 [2024-11-18 13:10:03.538900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.538933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.539198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.539229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.539470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.539503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.539772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.539804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.540041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.540073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 
00:27:05.980 [2024-11-18 13:10:03.540326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.540366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.540537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.540570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.540809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.540841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.541129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.541161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.541350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.541391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 
00:27:05.980 [2024-11-18 13:10:03.541566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.541599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.541789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.541822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.542075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.980 [2024-11-18 13:10:03.542107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.980 qpair failed and we were unable to recover it. 00:27:05.980 [2024-11-18 13:10:03.542367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.542402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.542576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.542608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 
00:27:05.981 [2024-11-18 13:10:03.542898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.542930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.543116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.543148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.543377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.543412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.543698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.543731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.543908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.543939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 
00:27:05.981 [2024-11-18 13:10:03.544106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.544138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.544265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.544298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.544511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.544544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.544808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.544840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.545053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.545086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 
00:27:05.981 [2024-11-18 13:10:03.545325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.545366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.545566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.545598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.545863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.545897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.546183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.546215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.546457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.546491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 
00:27:05.981 [2024-11-18 13:10:03.546795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.546828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.547117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.547149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.547277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.547309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.547575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.547609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.547866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.547899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 
00:27:05.981 [2024-11-18 13:10:03.548109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.548141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.548310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.548343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.548476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.548514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.548763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.548795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.548966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.548998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 
00:27:05.981 [2024-11-18 13:10:03.549171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.549203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.549402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.549436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.549700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.549732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.549852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.549885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.550155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.550188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 
00:27:05.981 [2024-11-18 13:10:03.550395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.550429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.550656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.550689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.550864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.550895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.551156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.551189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.981 [2024-11-18 13:10:03.551429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.551463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 
00:27:05.981 [2024-11-18 13:10:03.551632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.981 [2024-11-18 13:10:03.551665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.981 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.551873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.551906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.552118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.552150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.552410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.552442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.552744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.552777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 
00:27:05.982 [2024-11-18 13:10:03.553037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.553069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.553333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.553376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:05.982 [2024-11-18 13:10:03.553584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.553617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:27:05.982 [2024-11-18 13:10:03.553791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.553824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 
00:27:05.982 [2024-11-18 13:10:03.553944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.553976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.982 [2024-11-18 13:10:03.554217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.554251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.982 [2024-11-18 13:10:03.554512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.554547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.982 [2024-11-18 13:10:03.554719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.554754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 
00:27:05.982 [2024-11-18 13:10:03.554979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.555012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.555145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.555178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.555300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.555332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.555653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.555686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.555928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.555960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 
00:27:05.982 [2024-11-18 13:10:03.556156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.556189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.556382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.556417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.556660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.556693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.556877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.556911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.557019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.557052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 
00:27:05.982 [2024-11-18 13:10:03.557296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.557328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.557503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.557538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.557722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.557760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.558025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.558058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.558235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.558268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 
00:27:05.982 [2024-11-18 13:10:03.558459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.558495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.558758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.558790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.558913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.558946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.559057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.559089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.559280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.559312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 
00:27:05.982 [2024-11-18 13:10:03.559512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.559544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.559727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.559759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.559894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.559926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.982 qpair failed and we were unable to recover it. 00:27:05.982 [2024-11-18 13:10:03.560051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.982 [2024-11-18 13:10:03.560083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.560362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.560395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 
00:27:05.983 [2024-11-18 13:10:03.560571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.560603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.560880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.560914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.561029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.561063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.561185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.561216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.561393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.561427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 
00:27:05.983 [2024-11-18 13:10:03.561603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.561635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.561759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.561792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.561969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.562001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.562271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.562304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.562503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.562536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 
00:27:05.983 [2024-11-18 13:10:03.562727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.562759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.562950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.562982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.563104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.563136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.563344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.563386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.563638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.563671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 
00:27:05.983 [2024-11-18 13:10:03.563801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.563833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.563943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.563975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.564240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.564273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.564457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.564492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.564692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.564725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 
00:27:05.983 [2024-11-18 13:10:03.564862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.564895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.565082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.565114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.565222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.565255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.565381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.565414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.565684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.565717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 
00:27:05.983 [2024-11-18 13:10:03.565893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.565926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.566138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.566171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.566269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.566307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.566444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.566478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.566672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.566704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 
00:27:05.983 [2024-11-18 13:10:03.566888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.566919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.567125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.567157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.567327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.567367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.567546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.567579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.567687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.567720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 
00:27:05.983 [2024-11-18 13:10:03.567917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.567950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.983 qpair failed and we were unable to recover it. 00:27:05.983 [2024-11-18 13:10:03.568149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.983 [2024-11-18 13:10:03.568181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.984 qpair failed and we were unable to recover it. 00:27:05.984 [2024-11-18 13:10:03.568394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.984 [2024-11-18 13:10:03.568428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.984 qpair failed and we were unable to recover it. 00:27:05.984 [2024-11-18 13:10:03.568548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.984 [2024-11-18 13:10:03.568580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.984 qpair failed and we were unable to recover it. 00:27:05.984 [2024-11-18 13:10:03.568704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.984 [2024-11-18 13:10:03.568736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.984 qpair failed and we were unable to recover it. 
00:27:05.984 [2024-11-18 13:10:03.568907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.984 [2024-11-18 13:10:03.568939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.984 qpair failed and we were unable to recover it. 00:27:05.984 [2024-11-18 13:10:03.569141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.984 [2024-11-18 13:10:03.569174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.984 qpair failed and we were unable to recover it. 00:27:05.984 [2024-11-18 13:10:03.569295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.984 [2024-11-18 13:10:03.569328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.984 qpair failed and we were unable to recover it. 00:27:05.984 [2024-11-18 13:10:03.569526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.984 [2024-11-18 13:10:03.569559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.984 qpair failed and we were unable to recover it. 00:27:05.984 [2024-11-18 13:10:03.569759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.984 [2024-11-18 13:10:03.569791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.984 qpair failed and we were unable to recover it. 
00:27:05.986 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:05.986 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:05.986 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:05.986 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:05.986 [2024-11-18 13:10:03.591749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.591781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.592060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.592098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.592351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.592394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.592597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.592629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.592845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.592877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 
00:27:05.987 [2024-11-18 13:10:03.592994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.593027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.593223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.593255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.593521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.593555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.593750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.593781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.593979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.594010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 
00:27:05.987 [2024-11-18 13:10:03.594212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.594244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.594510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.594544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.594728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.594761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.594933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.594965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.595241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.595274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 
00:27:05.987 [2024-11-18 13:10:03.595406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.595440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.595679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.595712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.595895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.595928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.596109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.596141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.596393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.596427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 
00:27:05.987 [2024-11-18 13:10:03.596603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.596634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.596847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.596879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.597134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.597166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.597462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.597495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.597633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.597666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 
00:27:05.987 [2024-11-18 13:10:03.597862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.597894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.598197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.598230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.598463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.598495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.598728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.598768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.598921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.598953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 
00:27:05.987 [2024-11-18 13:10:03.599142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.599176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.599369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.599405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.599720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.599755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.599888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.599923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.600096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.600130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 
00:27:05.987 [2024-11-18 13:10:03.600411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.600447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.600647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.600681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.987 [2024-11-18 13:10:03.600824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.987 [2024-11-18 13:10:03.600857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.987 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.601000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.601034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.601300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.601333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 
00:27:05.988 [2024-11-18 13:10:03.601537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.601570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.601754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.601795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.601984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.602017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.602255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.602287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.602550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.602584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 
00:27:05.988 [2024-11-18 13:10:03.602768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.602800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.603066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.603099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.603203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.603235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.603505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.603539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.603791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.603823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 
00:27:05.988 [2024-11-18 13:10:03.604125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.604158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.604347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.604406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.604645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.604676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.604890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.604922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.605159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.605191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 
00:27:05.988 [2024-11-18 13:10:03.605437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.605470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.605602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.605634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.605805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.605837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.606092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.606125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.606316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.606348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 
00:27:05.988 [2024-11-18 13:10:03.606532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.606564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.606713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.606745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.607009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.607041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.607282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.607313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.607453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.607486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 
00:27:05.988 [2024-11-18 13:10:03.607663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.607696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.607891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.607923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.608159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.608191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.608510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.608562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.608834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.608867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 
00:27:05.988 [2024-11-18 13:10:03.609063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.609095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.609240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.609272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.609451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.609484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.609750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.609782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.988 [2024-11-18 13:10:03.610052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.610085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 
00:27:05.988 [2024-11-18 13:10:03.610278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.988 [2024-11-18 13:10:03.610308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.988 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.610487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.610521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.610759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.610791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.610964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.610995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.611188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.611221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 
00:27:05.989 [2024-11-18 13:10:03.611410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.611444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.611626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.611666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.611798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.611830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.611989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.612021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.612208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.612240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 
00:27:05.989 [2024-11-18 13:10:03.612419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.612452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.612640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.612672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.612908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.612940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.613113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.613146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.613266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.613297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 
00:27:05.989 [2024-11-18 13:10:03.613488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.613521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.613785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.613818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.614034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.614065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.614361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.614396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.614587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.614619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 
00:27:05.989 [2024-11-18 13:10:03.614864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.614897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.615137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.615170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.615343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.615385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.615603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.615636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.615898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.615933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 
00:27:05.989 [2024-11-18 13:10:03.616151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.616186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.616391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.616426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.616556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.616589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.616731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.616764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.617025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.617059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 
00:27:05.989 [2024-11-18 13:10:03.617345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.617389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.617647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.617685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.617859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.617892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.618115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.989 [2024-11-18 13:10:03.618176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.989 qpair failed and we were unable to recover it. 00:27:05.989 [2024-11-18 13:10:03.618486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.618525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 
00:27:05.990 [2024-11-18 13:10:03.618815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.618848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.619132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.619165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.619487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.619522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.619853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.619887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.620070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.620103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 
00:27:05.990 [2024-11-18 13:10:03.620344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.620386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.620563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.620595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.620883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.620915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.621110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.621142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.621366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.621399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 
00:27:05.990 [2024-11-18 13:10:03.621581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.621613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.621902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.621935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.622055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.622088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 Malloc0
00:27:05.990 [2024-11-18 13:10:03.622328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.622369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.622572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.622605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.622841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.622875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:05.990 [2024-11-18 13:10:03.623059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.623091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:05.990 [2024-11-18 13:10:03.623277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.623310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.623562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.623596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:05.990 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:05.990 [2024-11-18 13:10:03.623886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.623917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.624123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.624156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.624428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.624461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.624683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.624715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.624916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.990 [2024-11-18 13:10:03.624955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:05.990 qpair failed and we were unable to recover it.
00:27:05.990 [2024-11-18 13:10:03.625246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.625278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.625532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.625565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.625780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.625813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.626058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.626091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.626263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.626295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 
00:27:05.990 [2024-11-18 13:10:03.626492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.626525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.626707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.626739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.626858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.626891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.627151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.627183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.627453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.627486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 
00:27:05.990 [2024-11-18 13:10:03.627732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.627764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.990 qpair failed and we were unable to recover it. 00:27:05.990 [2024-11-18 13:10:03.627980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.990 [2024-11-18 13:10:03.628011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.991 qpair failed and we were unable to recover it. 00:27:05.991 [2024-11-18 13:10:03.628270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.991 [2024-11-18 13:10:03.628303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:05.991 qpair failed and we were unable to recover it. 00:27:05.991 [2024-11-18 13:10:03.628514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.991 [2024-11-18 13:10:03.628551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.991 qpair failed and we were unable to recover it. 00:27:05.991 [2024-11-18 13:10:03.628816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.991 [2024-11-18 13:10:03.628849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.991 qpair failed and we were unable to recover it. 
00:27:05.991 [2024-11-18 13:10:03.629043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.991 [2024-11-18 13:10:03.629075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.991 qpair failed and we were unable to recover it. 00:27:05.991 [2024-11-18 13:10:03.629344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.991 [2024-11-18 13:10:03.629388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.991 qpair failed and we were unable to recover it. 00:27:05.991 [2024-11-18 13:10:03.629602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.991 [2024-11-18 13:10:03.629634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.991 qpair failed and we were unable to recover it. 00:27:05.991 [2024-11-18 13:10:03.629751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.991 [2024-11-18 13:10:03.629875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.991 [2024-11-18 13:10:03.629908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:05.991 qpair failed and we were unable to recover it. 00:27:06.253 [2024-11-18 13:10:03.630095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.630128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 
00:27:06.253 [2024-11-18 13:10:03.630395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.630430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-11-18 13:10:03.630683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.630723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-11-18 13:10:03.630933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.630966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-11-18 13:10:03.631146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.631178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-11-18 13:10:03.631418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.631450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 
00:27:06.253 [2024-11-18 13:10:03.631641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.631673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad1c000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-11-18 13:10:03.631937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.631979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-11-18 13:10:03.632127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.632164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-11-18 13:10:03.632394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.632431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 00:27:06.253 [2024-11-18 13:10:03.632611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.253 [2024-11-18 13:10:03.632643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.253 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-11-18 13:10:03.632826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.632859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.633128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.633161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.633400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.633433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.633673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.633706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.633890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.633922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-11-18 13:10:03.634057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.634090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.634206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.634239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.634482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.634515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.634685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.634718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.634901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.634934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-11-18 13:10:03.635193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.635226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.635451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.635485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.635712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.635744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.635914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.635946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.636131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.636164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-11-18 13:10:03.636386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.636419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.636684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.636716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.636909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.636942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.637220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.637252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.637494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.637528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-11-18 13:10:03.637953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.254 [2024-11-18 13:10:03.637991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:06.254 qpair failed and we were unable to recover it.
00:27:06.254 [2024-11-18 13:10:03.638412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.254 [2024-11-18 13:10:03.638451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73fba0 with addr=10.0.0.2, port=4420
00:27:06.254 qpair failed and we were unable to recover it.
00:27:06.254 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:06.254 [2024-11-18 13:10:03.638665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.254 [2024-11-18 13:10:03.638701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:06.254 qpair failed and we were unable to recover it.
00:27:06.254 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:06.254 [2024-11-18 13:10:03.638895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.254 [2024-11-18 13:10:03.638927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:06.254 qpair failed and we were unable to recover it.
00:27:06.254 [2024-11-18 13:10:03.639142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.254 [2024-11-18 13:10:03.639174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:06.254 qpair failed and we were unable to recover it.
00:27:06.254 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:06.254 [2024-11-18 13:10:03.639371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.254 [2024-11-18 13:10:03.639404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:06.254 qpair failed and we were unable to recover it.
00:27:06.254 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.254 [2024-11-18 13:10:03.639646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.254 [2024-11-18 13:10:03.639677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:06.254 qpair failed and we were unable to recover it.
00:27:06.254 [2024-11-18 13:10:03.639880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.254 [2024-11-18 13:10:03.639913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420
00:27:06.254 qpair failed and we were unable to recover it.
00:27:06.254 [2024-11-18 13:10:03.640183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.640216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.640506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.640539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.640779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.640811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.641026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.641058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.641258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.641290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 
00:27:06.254 [2024-11-18 13:10:03.641538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.641570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.641754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.641786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.254 qpair failed and we were unable to recover it. 00:27:06.254 [2024-11-18 13:10:03.641914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.254 [2024-11-18 13:10:03.641947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.642212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.642244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.642468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.642501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 [2024-11-18 13:10:03.642741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.642773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.642945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.642976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.643213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.643245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.643504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.643537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.643733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.643766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 [2024-11-18 13:10:03.643950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.643982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.644172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.644204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.644467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.644501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.644742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.644774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.645039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.645071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad18000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 [2024-11-18 13:10:03.645283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.645329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.645587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.645622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.645890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.645923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.646106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.646139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.646380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.646414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.255 [2024-11-18 13:10:03.646597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.646630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:06.255 [2024-11-18 13:10:03.646914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.646948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.255 [2024-11-18 13:10:03.647200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.647233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.647398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.647431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.255 [2024-11-18 13:10:03.647571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.647604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.647870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.647903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.648091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.648124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.648305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.648338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.648619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.648652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 [2024-11-18 13:10:03.648911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.648944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.649190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.649222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.649435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.649468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.649726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.649758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.650044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.650076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.255 [2024-11-18 13:10:03.650348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.650390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.650660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.650693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.650874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.650906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.651165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.651197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 00:27:06.255 [2024-11-18 13:10:03.651475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.255 [2024-11-18 13:10:03.651509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.255 qpair failed and we were unable to recover it. 
00:27:06.256 [2024-11-18 13:10:03.651700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.651732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.651916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.651949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.652137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.652170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.652346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.652387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.652574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.652606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 
00:27:06.256 [2024-11-18 13:10:03.652873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.652905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.653185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.653217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.653407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.653440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.653632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.653664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.653902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.653934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 
00:27:06.256 [2024-11-18 13:10:03.654145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.654177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.654445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.654479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.256 [2024-11-18 13:10:03.654603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.654636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.654904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.256 [2024-11-18 13:10:03.654936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 
00:27:06.256 [2024-11-18 13:10:03.655153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.655184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.256 [2024-11-18 13:10:03.655471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.655505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.256 [2024-11-18 13:10:03.655634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.655667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.655910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.655942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 
00:27:06.256 [2024-11-18 13:10:03.656209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.656241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.656529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.656563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.656838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.656871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.657141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.657174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 00:27:06.256 [2024-11-18 13:10:03.657373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.256 [2024-11-18 13:10:03.657406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420 00:27:06.256 qpair failed and we were unable to recover it. 
00:27:06.256 [2024-11-18 13:10:03.657655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.256 [2024-11-18 13:10:03.657687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:06.256 qpair failed and we were unable to recover it.
00:27:06.256 [2024-11-18 13:10:03.657910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.256 [2024-11-18 13:10:03.657943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fad24000b90 with addr=10.0.0.2, port=4420
00:27:06.256 qpair failed and we were unable to recover it.
00:27:06.256 [2024-11-18 13:10:03.657992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:06.256 [2024-11-18 13:10:03.660443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.256 [2024-11-18 13:10:03.660573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.256 [2024-11-18 13:10:03.660618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.256 [2024-11-18 13:10:03.660642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.256 [2024-11-18 13:10:03.660664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.256 [2024-11-18 13:10:03.660718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.256 qpair failed and we were unable to recover it.
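[editor's note: the dominant error up to this point, "connect() failed, errno = 111", is ECONNREFUSED — the initiator keeps dialing 10.0.0.2:4420 before the target's nvmf_subsystem_add_listener RPC brings the listener up, so the kernel rejects every connect(). A minimal local sketch of the same condition, assuming a Linux host; the 127.0.0.1 address and ephemeral port are illustrative, not taken from this log:]

```python
import errno
import socket

# Find a loopback port that nothing is listening on: bind to port 0 so the
# kernel picks a free port, note it, then close the socket again.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()  # nothing listens on `port` now

# Connecting to a port with no listener fails immediately with
# ConnectionRefusedError, whose errno is ECONNREFUSED (111 on Linux) --
# the same errno the SPDK initiator logs above.
raised = None
try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
except ConnectionRefusedError as exc:
    raised = exc
```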
00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.256 [2024-11-18 13:10:03.670367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.256 [2024-11-18 13:10:03.670468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.256 [2024-11-18 13:10:03.670509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.256 [2024-11-18 13:10:03.670532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.256 [2024-11-18 13:10:03.670553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:06.256 [2024-11-18 13:10:03.670601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.256 qpair failed and we were unable to recover it.
00:27:06.256 13:10:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2483203
00:27:06.256 [2024-11-18 13:10:03.680402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.256 [2024-11-18 13:10:03.680496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.256 [2024-11-18 13:10:03.680523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.256 [2024-11-18 13:10:03.680539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.256 [2024-11-18 13:10:03.680554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.257 [2024-11-18 13:10:03.680586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.257 qpair failed and we were unable to recover it.
00:27:06.257 [2024-11-18 13:10:03.690371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.257 [2024-11-18 13:10:03.690440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.257 [2024-11-18 13:10:03.690460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.257 [2024-11-18 13:10:03.690474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.257 [2024-11-18 13:10:03.690484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.257 [2024-11-18 13:10:03.690507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.257 qpair failed and we were unable to recover it.
00:27:06.257 [2024-11-18 13:10:03.700329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.257 [2024-11-18 13:10:03.700392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.257 [2024-11-18 13:10:03.700406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.257 [2024-11-18 13:10:03.700414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.257 [2024-11-18 13:10:03.700421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.257 [2024-11-18 13:10:03.700437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.257 qpair failed and we were unable to recover it.
00:27:06.257 [2024-11-18 13:10:03.710329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.257 [2024-11-18 13:10:03.710393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.257 [2024-11-18 13:10:03.710410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.257 [2024-11-18 13:10:03.710418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.257 [2024-11-18 13:10:03.710425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.257 [2024-11-18 13:10:03.710441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.257 qpair failed and we were unable to recover it.
00:27:06.257 [2024-11-18 13:10:03.720367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.720419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.720433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.720440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.720447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.257 [2024-11-18 13:10:03.720464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.257 qpair failed and we were unable to recover it. 
00:27:06.257 [2024-11-18 13:10:03.730408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.730469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.730483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.730491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.730498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.257 [2024-11-18 13:10:03.730513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.257 qpair failed and we were unable to recover it. 
00:27:06.257 [2024-11-18 13:10:03.740514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.740618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.740633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.740640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.740647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.257 [2024-11-18 13:10:03.740663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.257 qpair failed and we were unable to recover it. 
00:27:06.257 [2024-11-18 13:10:03.750514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.750616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.750630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.750638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.750644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.257 [2024-11-18 13:10:03.750660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.257 qpair failed and we were unable to recover it. 
00:27:06.257 [2024-11-18 13:10:03.760491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.760547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.760561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.760568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.760575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.257 [2024-11-18 13:10:03.760591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.257 qpair failed and we were unable to recover it. 
00:27:06.257 [2024-11-18 13:10:03.770515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.770574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.770588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.770595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.770602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.257 [2024-11-18 13:10:03.770617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.257 qpair failed and we were unable to recover it. 
00:27:06.257 [2024-11-18 13:10:03.780550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.780616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.780629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.780637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.780644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.257 [2024-11-18 13:10:03.780659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.257 qpair failed and we were unable to recover it. 
00:27:06.257 [2024-11-18 13:10:03.790564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.790619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.790633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.790640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.790647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.257 [2024-11-18 13:10:03.790661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.257 qpair failed and we were unable to recover it. 
00:27:06.257 [2024-11-18 13:10:03.800607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.800666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.800680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.800688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.800695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.257 [2024-11-18 13:10:03.800710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.257 qpair failed and we were unable to recover it. 
00:27:06.257 [2024-11-18 13:10:03.810625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.257 [2024-11-18 13:10:03.810690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.257 [2024-11-18 13:10:03.810704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.257 [2024-11-18 13:10:03.810712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.257 [2024-11-18 13:10:03.810719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.810734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.820649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.820705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.820719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.820729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.820736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.820751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.830681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.830736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.830750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.830758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.830764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.830779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.840629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.840684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.840698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.840705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.840712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.840727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.850729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.850786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.850800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.850808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.850815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.850831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.860767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.860822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.860838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.860846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.860852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.860872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.870797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.870851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.870865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.870872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.870879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.870894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.880808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.880860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.880873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.880881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.880887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.880903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.890789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.890844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.890858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.890866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.890873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.890888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.900878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.900932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.900948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.900956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.900963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.900979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.910911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.910965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.258 [2024-11-18 13:10:03.910979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.258 [2024-11-18 13:10:03.910986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.258 [2024-11-18 13:10:03.910993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.258 [2024-11-18 13:10:03.911009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.258 qpair failed and we were unable to recover it. 
00:27:06.258 [2024-11-18 13:10:03.920933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.258 [2024-11-18 13:10:03.920985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.259 [2024-11-18 13:10:03.920999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.259 [2024-11-18 13:10:03.921006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.259 [2024-11-18 13:10:03.921013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.259 [2024-11-18 13:10:03.921027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.259 qpair failed and we were unable to recover it. 
00:27:06.259 [2024-11-18 13:10:03.930942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.259 [2024-11-18 13:10:03.931000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.259 [2024-11-18 13:10:03.931014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.259 [2024-11-18 13:10:03.931022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.259 [2024-11-18 13:10:03.931029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.259 [2024-11-18 13:10:03.931044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.259 qpair failed and we were unable to recover it. 
00:27:06.259 [2024-11-18 13:10:03.940994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.259 [2024-11-18 13:10:03.941052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.259 [2024-11-18 13:10:03.941067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.259 [2024-11-18 13:10:03.941074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.259 [2024-11-18 13:10:03.941081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.259 [2024-11-18 13:10:03.941096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.259 qpair failed and we were unable to recover it. 
00:27:06.520 [2024-11-18 13:10:03.951034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.520 [2024-11-18 13:10:03.951090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.520 [2024-11-18 13:10:03.951109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.520 [2024-11-18 13:10:03.951117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.520 [2024-11-18 13:10:03.951123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.520 [2024-11-18 13:10:03.951139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.520 qpair failed and we were unable to recover it. 
00:27:06.520 [2024-11-18 13:10:03.961044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.520 [2024-11-18 13:10:03.961102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.520 [2024-11-18 13:10:03.961115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.520 [2024-11-18 13:10:03.961123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.520 [2024-11-18 13:10:03.961129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.520 [2024-11-18 13:10:03.961145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.520 qpair failed and we were unable to recover it. 
00:27:06.520 [2024-11-18 13:10:03.971094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.521 [2024-11-18 13:10:03.971155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.521 [2024-11-18 13:10:03.971169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.521 [2024-11-18 13:10:03.971176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.521 [2024-11-18 13:10:03.971183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.521 [2024-11-18 13:10:03.971199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.521 qpair failed and we were unable to recover it. 
00:27:06.521 [2024-11-18 13:10:03.981126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.521 [2024-11-18 13:10:03.981182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.521 [2024-11-18 13:10:03.981196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.521 [2024-11-18 13:10:03.981203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.521 [2024-11-18 13:10:03.981210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.521 [2024-11-18 13:10:03.981227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.521 qpair failed and we were unable to recover it. 
00:27:06.521 [2024-11-18 13:10:03.991180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:03.991234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:03.991248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:03.991255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:03.991265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:03.991281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.001170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.001226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.001241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.001249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.001255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.001271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.011218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.011278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.011293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.011300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.011307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.011323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.021237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.021296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.021310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.021318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.021324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.021340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.031286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.031347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.031365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.031373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.031380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.031396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.041282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.041343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.041361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.041368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.041375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.041390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.051259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.051334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.051349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.051360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.051367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.051383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.061367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.061433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.061447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.061455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.061462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.061477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.071387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.071442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.071456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.071463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.071471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.071487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.081412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.081470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.081488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.081495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.081502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.081518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.091386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.091453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.521 [2024-11-18 13:10:04.091467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.521 [2024-11-18 13:10:04.091475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.521 [2024-11-18 13:10:04.091481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.521 [2024-11-18 13:10:04.091497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.521 qpair failed and we were unable to recover it.
00:27:06.521 [2024-11-18 13:10:04.101510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.521 [2024-11-18 13:10:04.101614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.101628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.101636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.101643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.101659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.111507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.111561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.111575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.111583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.111590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.111606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.121560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.121613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.121627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.121635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.121645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.121662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.131551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.131616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.131630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.131638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.131645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.131661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.141557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.141624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.141638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.141646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.141653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.141669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.151554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.151611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.151627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.151634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.151641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.151657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.161643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.161703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.161717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.161725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.161732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.161747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.171695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.171770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.171786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.171794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.171801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.171817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.181660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.181713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.181727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.181735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.181741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.181757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.191736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.191792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.191807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.191815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.191822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.191838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.201762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.201819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.201833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.201841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.201848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.201864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.522 [2024-11-18 13:10:04.211823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.522 [2024-11-18 13:10:04.211883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.522 [2024-11-18 13:10:04.211900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.522 [2024-11-18 13:10:04.211908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.522 [2024-11-18 13:10:04.211914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.522 [2024-11-18 13:10:04.211929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.522 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.221837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.221896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.221910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.221919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.221926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.221943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.231787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.231842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.231856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.231863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.231870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.231885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.241831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.241884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.241899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.241906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.241914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.241929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.251854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.251912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.251927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.251938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.251945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.251961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.261872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.261945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.261959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.261967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.261974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.261990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.271892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.271950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.271964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.271972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.271978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.271993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.281938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.281992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.282007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.282014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.282020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.282036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.291999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.292063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.292076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.292084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.292090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.292107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.302057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.302124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.302138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.302145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.302151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.302167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.312129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.312235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.312249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.312257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.312264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.783 [2024-11-18 13:10:04.312279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.783 qpair failed and we were unable to recover it.
00:27:06.783 [2024-11-18 13:10:04.322106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.783 [2024-11-18 13:10:04.322163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.783 [2024-11-18 13:10:04.322177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.783 [2024-11-18 13:10:04.322185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.783 [2024-11-18 13:10:04.322191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.784 [2024-11-18 13:10:04.322207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.784 qpair failed and we were unable to recover it.
00:27:06.784 [2024-11-18 13:10:04.332108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.784 [2024-11-18 13:10:04.332163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.784 [2024-11-18 13:10:04.332177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.784 [2024-11-18 13:10:04.332185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.784 [2024-11-18 13:10:04.332191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:06.784 [2024-11-18 13:10:04.332207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.784 qpair failed and we were unable to recover it.
00:27:06.784 [2024-11-18 13:10:04.342102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.342159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.342174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.342183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.342191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.342206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.352123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.352183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.352196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.352204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.352210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.352226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.362227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.362284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.362298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.362306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.362312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.362327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.372178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.372245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.372259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.372267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.372273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.372288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.382267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.382324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.382338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.382356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.382363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.382379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.392319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.392375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.392390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.392397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.392405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.392420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.402369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.402421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.402434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.402442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.402448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.402464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.412288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.412347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.412367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.412374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.412380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.412396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.422395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.422451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.422465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.422473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.422480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.422499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.432426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.432516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.432531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.432539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.432545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.432561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.442505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.442560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.442574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.442582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.442589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.784 [2024-11-18 13:10:04.442605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.784 qpair failed and we were unable to recover it. 
00:27:06.784 [2024-11-18 13:10:04.452413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.784 [2024-11-18 13:10:04.452468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.784 [2024-11-18 13:10:04.452482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.784 [2024-11-18 13:10:04.452489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.784 [2024-11-18 13:10:04.452495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.785 [2024-11-18 13:10:04.452511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.785 qpair failed and we were unable to recover it. 
00:27:06.785 [2024-11-18 13:10:04.462682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.785 [2024-11-18 13:10:04.462755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.785 [2024-11-18 13:10:04.462769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.785 [2024-11-18 13:10:04.462776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.785 [2024-11-18 13:10:04.462783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.785 [2024-11-18 13:10:04.462799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.785 qpair failed and we were unable to recover it. 
00:27:06.785 [2024-11-18 13:10:04.472499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.785 [2024-11-18 13:10:04.472556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.785 [2024-11-18 13:10:04.472570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.785 [2024-11-18 13:10:04.472577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.785 [2024-11-18 13:10:04.472584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:06.785 [2024-11-18 13:10:04.472599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.785 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-18 13:10:04.482633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.045 [2024-11-18 13:10:04.482686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.045 [2024-11-18 13:10:04.482700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.045 [2024-11-18 13:10:04.482707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.045 [2024-11-18 13:10:04.482714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.045 [2024-11-18 13:10:04.482730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-18 13:10:04.492618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.045 [2024-11-18 13:10:04.492674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.045 [2024-11-18 13:10:04.492688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.045 [2024-11-18 13:10:04.492695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.045 [2024-11-18 13:10:04.492702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.045 [2024-11-18 13:10:04.492718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-18 13:10:04.502639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.045 [2024-11-18 13:10:04.502696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.045 [2024-11-18 13:10:04.502709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.045 [2024-11-18 13:10:04.502716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.045 [2024-11-18 13:10:04.502723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.045 [2024-11-18 13:10:04.502739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-18 13:10:04.512649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.045 [2024-11-18 13:10:04.512705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.045 [2024-11-18 13:10:04.512723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.045 [2024-11-18 13:10:04.512730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.045 [2024-11-18 13:10:04.512737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.045 [2024-11-18 13:10:04.512752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-18 13:10:04.522673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.045 [2024-11-18 13:10:04.522726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.045 [2024-11-18 13:10:04.522740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.045 [2024-11-18 13:10:04.522748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.045 [2024-11-18 13:10:04.522755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.045 [2024-11-18 13:10:04.522770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.045 qpair failed and we were unable to recover it. 
00:27:07.045 [2024-11-18 13:10:04.532707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.045 [2024-11-18 13:10:04.532772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.045 [2024-11-18 13:10:04.532786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.045 [2024-11-18 13:10:04.532793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.045 [2024-11-18 13:10:04.532800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.532815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.542739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.542799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.542813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.542821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.542827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.542842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.552686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.552749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.552764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.552772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.552783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.552799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.562789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.562846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.562859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.562867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.562874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.562890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.572883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.572985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.572999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.573006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.573013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.573028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.582847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.582908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.582922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.582930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.582936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.582951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.592835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.592910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.592925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.592933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.592939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.592954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.602899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.602954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.602968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.602975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.602982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.602998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.612950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.613009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.613023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.613030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.613037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.613052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.622963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.623021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.623035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.623042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.623049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.623064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.632978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.633030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.633045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.633052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.633059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.633074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.643067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.643128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.643146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.643155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.643161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.643176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.653062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.653126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.653140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.653149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.653156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.046 [2024-11-18 13:10:04.653171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-11-18 13:10:04.663121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.046 [2024-11-18 13:10:04.663181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.046 [2024-11-18 13:10:04.663195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.046 [2024-11-18 13:10:04.663202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.046 [2024-11-18 13:10:04.663209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.047 [2024-11-18 13:10:04.663224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-18 13:10:04.673102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.047 [2024-11-18 13:10:04.673157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.047 [2024-11-18 13:10:04.673172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.047 [2024-11-18 13:10:04.673180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.047 [2024-11-18 13:10:04.673187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.047 [2024-11-18 13:10:04.673202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-18 13:10:04.683097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.047 [2024-11-18 13:10:04.683152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.047 [2024-11-18 13:10:04.683167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.047 [2024-11-18 13:10:04.683174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.047 [2024-11-18 13:10:04.683184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.047 [2024-11-18 13:10:04.683200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-18 13:10:04.693160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.047 [2024-11-18 13:10:04.693220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.047 [2024-11-18 13:10:04.693234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.047 [2024-11-18 13:10:04.693241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.047 [2024-11-18 13:10:04.693248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.047 [2024-11-18 13:10:04.693263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-18 13:10:04.703114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.047 [2024-11-18 13:10:04.703202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.047 [2024-11-18 13:10:04.703216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.047 [2024-11-18 13:10:04.703224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.047 [2024-11-18 13:10:04.703230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.047 [2024-11-18 13:10:04.703245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-18 13:10:04.713236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.047 [2024-11-18 13:10:04.713321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.047 [2024-11-18 13:10:04.713363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.047 [2024-11-18 13:10:04.713371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.047 [2024-11-18 13:10:04.713378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.047 [2024-11-18 13:10:04.713403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-18 13:10:04.723245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.047 [2024-11-18 13:10:04.723300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.047 [2024-11-18 13:10:04.723315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.047 [2024-11-18 13:10:04.723322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.047 [2024-11-18 13:10:04.723329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.047 [2024-11-18 13:10:04.723346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-11-18 13:10:04.733280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.047 [2024-11-18 13:10:04.733336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.047 [2024-11-18 13:10:04.733350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.047 [2024-11-18 13:10:04.733361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.047 [2024-11-18 13:10:04.733367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.047 [2024-11-18 13:10:04.733384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.743318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.743378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.743393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.743401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.743407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.743424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.753363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.753420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.753434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.753441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.753448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.753464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.763349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.763417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.763431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.763439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.763446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.763461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.773400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.773469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.773486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.773494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.773500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.773516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.783358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.783426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.783440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.783447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.783454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.783469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.793458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.793509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.793522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.793530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.793537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.793552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.803458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.803513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.803527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.803534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.803541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.803557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.813503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.813564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.813578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.813588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.813596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.813612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.823529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.823586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.823601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.823608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.823615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.823630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.833561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.833613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.308 [2024-11-18 13:10:04.833627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.308 [2024-11-18 13:10:04.833634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.308 [2024-11-18 13:10:04.833641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.308 [2024-11-18 13:10:04.833656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.308 qpair failed and we were unable to recover it. 
00:27:07.308 [2024-11-18 13:10:04.843546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.308 [2024-11-18 13:10:04.843601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.843615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.843623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.843629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.843645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.853624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.853684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.853698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.853705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.853712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.853728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.863656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.863713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.863727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.863734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.863741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.863757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.873680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.873764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.873777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.873784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.873791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.873806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.883706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.883764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.883778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.883785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.883792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.883808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.893743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.893806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.893819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.893826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.893834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.893850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.903755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.903812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.903826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.903833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.903840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.903855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.913777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.913852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.913867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.913874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.913881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.913897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.923798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.923853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.923867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.923874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.923881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.923896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.933884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.933940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.933954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.933961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.933968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.933983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.943907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.943972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.943986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.943997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.944003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.944019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.953951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.954035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.954048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.954055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.954062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.954077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.309 qpair failed and we were unable to recover it. 
00:27:07.309 [2024-11-18 13:10:04.963925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.309 [2024-11-18 13:10:04.963978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.309 [2024-11-18 13:10:04.963992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.309 [2024-11-18 13:10:04.963999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.309 [2024-11-18 13:10:04.964006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.309 [2024-11-18 13:10:04.964022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.310 qpair failed and we were unable to recover it. 
00:27:07.310 [2024-11-18 13:10:04.973967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.310 [2024-11-18 13:10:04.974023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.310 [2024-11-18 13:10:04.974037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.310 [2024-11-18 13:10:04.974044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.310 [2024-11-18 13:10:04.974052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.310 [2024-11-18 13:10:04.974067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.310 qpair failed and we were unable to recover it. 
00:27:07.310 [2024-11-18 13:10:04.983990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.310 [2024-11-18 13:10:04.984042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.310 [2024-11-18 13:10:04.984056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.310 [2024-11-18 13:10:04.984063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.310 [2024-11-18 13:10:04.984070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.310 [2024-11-18 13:10:04.984089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.310 qpair failed and we were unable to recover it. 
00:27:07.310 [2024-11-18 13:10:04.994011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.310 [2024-11-18 13:10:04.994067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.310 [2024-11-18 13:10:04.994081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.310 [2024-11-18 13:10:04.994088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.310 [2024-11-18 13:10:04.994095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.310 [2024-11-18 13:10:04.994110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.310 qpair failed and we were unable to recover it. 
00:27:07.310 [2024-11-18 13:10:05.004039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.310 [2024-11-18 13:10:05.004090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.310 [2024-11-18 13:10:05.004104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.310 [2024-11-18 13:10:05.004111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.310 [2024-11-18 13:10:05.004118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.310 [2024-11-18 13:10:05.004133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.310 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-11-18 13:10:05.014082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.571 [2024-11-18 13:10:05.014145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.571 [2024-11-18 13:10:05.014159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.571 [2024-11-18 13:10:05.014167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.571 [2024-11-18 13:10:05.014173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.571 [2024-11-18 13:10:05.014189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.571 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-11-18 13:10:05.024148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.571 [2024-11-18 13:10:05.024205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.571 [2024-11-18 13:10:05.024219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.571 [2024-11-18 13:10:05.024226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.571 [2024-11-18 13:10:05.024233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.571 [2024-11-18 13:10:05.024249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.571 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-11-18 13:10:05.034124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.571 [2024-11-18 13:10:05.034178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.571 [2024-11-18 13:10:05.034192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.571 [2024-11-18 13:10:05.034199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.571 [2024-11-18 13:10:05.034206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.571 [2024-11-18 13:10:05.034222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.571 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-11-18 13:10:05.044160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.571 [2024-11-18 13:10:05.044218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.571 [2024-11-18 13:10:05.044232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.571 [2024-11-18 13:10:05.044240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.571 [2024-11-18 13:10:05.044246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.571 [2024-11-18 13:10:05.044262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.571 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-11-18 13:10:05.054175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.571 [2024-11-18 13:10:05.054230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.571 [2024-11-18 13:10:05.054243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.571 [2024-11-18 13:10:05.054251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.571 [2024-11-18 13:10:05.054257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.571 [2024-11-18 13:10:05.054273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.571 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-11-18 13:10:05.064221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.571 [2024-11-18 13:10:05.064275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.571 [2024-11-18 13:10:05.064290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.571 [2024-11-18 13:10:05.064297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.571 [2024-11-18 13:10:05.064304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.571 [2024-11-18 13:10:05.064319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.571 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-11-18 13:10:05.074249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.571 [2024-11-18 13:10:05.074303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.571 [2024-11-18 13:10:05.074320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.571 [2024-11-18 13:10:05.074327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.571 [2024-11-18 13:10:05.074334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.571 [2024-11-18 13:10:05.074349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.571 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-11-18 13:10:05.084282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.571 [2024-11-18 13:10:05.084383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.571 [2024-11-18 13:10:05.084397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.571 [2024-11-18 13:10:05.084405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.571 [2024-11-18 13:10:05.084411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.571 [2024-11-18 13:10:05.084427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.571 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-11-18 13:10:05.094366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.571 [2024-11-18 13:10:05.094446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.571 [2024-11-18 13:10:05.094461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.094468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.094474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.094489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.104312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.104401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.104415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.104422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.104428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.104445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.114363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.114417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.114433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.114441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.114451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.114467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.124391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.124447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.124462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.124470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.124477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.124492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.134369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.134432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.134446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.134454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.134461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.134477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.144382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.144448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.144461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.144469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.144476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.144491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.154481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.154533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.154547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.154555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.154562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.154577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.164503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.164558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.164572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.164581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.164587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.164602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.174551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.174608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.174622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.174629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.174636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.174652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.184504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.184554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.184568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.184576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.184582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.184598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.194606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.194655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.194668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.194676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.194683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.194698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.204621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.204676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.204693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.204701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.204708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.204724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.214665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.214722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.214736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.572 [2024-11-18 13:10:05.214744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.572 [2024-11-18 13:10:05.214751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.572 [2024-11-18 13:10:05.214766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.572 qpair failed and we were unable to recover it. 
00:27:07.572 [2024-11-18 13:10:05.224686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.572 [2024-11-18 13:10:05.224740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.572 [2024-11-18 13:10:05.224753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.573 [2024-11-18 13:10:05.224760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.573 [2024-11-18 13:10:05.224767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.573 [2024-11-18 13:10:05.224782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.573 qpair failed and we were unable to recover it. 
00:27:07.573 [2024-11-18 13:10:05.234682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.573 [2024-11-18 13:10:05.234781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.573 [2024-11-18 13:10:05.234795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.573 [2024-11-18 13:10:05.234802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.573 [2024-11-18 13:10:05.234809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.573 [2024-11-18 13:10:05.234825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.573 qpair failed and we were unable to recover it. 
00:27:07.573 [2024-11-18 13:10:05.244738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.573 [2024-11-18 13:10:05.244794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.573 [2024-11-18 13:10:05.244809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.573 [2024-11-18 13:10:05.244817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.573 [2024-11-18 13:10:05.244827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.573 [2024-11-18 13:10:05.244842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.573 qpair failed and we were unable to recover it. 
00:27:07.573 [2024-11-18 13:10:05.254786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.573 [2024-11-18 13:10:05.254858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.573 [2024-11-18 13:10:05.254872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.573 [2024-11-18 13:10:05.254881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.573 [2024-11-18 13:10:05.254887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.573 [2024-11-18 13:10:05.254905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.573 qpair failed and we were unable to recover it. 
00:27:07.573 [2024-11-18 13:10:05.264796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.573 [2024-11-18 13:10:05.264853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.573 [2024-11-18 13:10:05.264867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.573 [2024-11-18 13:10:05.264874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.573 [2024-11-18 13:10:05.264881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.573 [2024-11-18 13:10:05.264897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.573 qpair failed and we were unable to recover it. 
00:27:07.833 [2024-11-18 13:10:05.274815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.833 [2024-11-18 13:10:05.274895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.274910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.274917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.274924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.274940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.284909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.284963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.284977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.284984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.284991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.285007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.294920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.294997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.295012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.295019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.295026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.295041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.304853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.304913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.304929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.304936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.304943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.304960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.314965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.315018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.315032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.315040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.315046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.315062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.324979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.325035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.325049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.325057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.325063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.325079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.335005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.335073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.335090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.335098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.335104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.335119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.345036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.345093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.345109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.345117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.345124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.345140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.355072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.355124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.355138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.355145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.355151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.355166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.365078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.365137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.365151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.365159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.365166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.365181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.375131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.375191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.375205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.375216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.375223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.375238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.385148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.385226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.385240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.385248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.385254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.385269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.395216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.834 [2024-11-18 13:10:05.395270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.834 [2024-11-18 13:10:05.395285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.834 [2024-11-18 13:10:05.395292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.834 [2024-11-18 13:10:05.395299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.834 [2024-11-18 13:10:05.395315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.834 qpair failed and we were unable to recover it. 
00:27:07.834 [2024-11-18 13:10:05.405223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.405285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.405299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.405308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.405314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.405330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.415318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.415396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.415410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.415418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.415424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.415443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.425266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.425317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.425331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.425338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.425345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.425364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.435296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.435353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.435367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.435375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.435382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.435397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.445323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.445381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.445395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.445403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.445410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.445426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.455363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.455422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.455436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.455444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.455451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.455467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.465381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.465449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.465464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.465472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.465479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.465496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.475410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.475465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.475479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.475487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.475493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.475508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.485441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.485495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.485508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.485515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.485522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.485537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.495478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.495547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.495560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.495568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.495575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.495590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.505533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.505588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.505602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.505614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.505621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.505636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.515527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.515582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.515596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.515604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.515611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.515627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:07.835 [2024-11-18 13:10:05.525576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.835 [2024-11-18 13:10:05.525629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.835 [2024-11-18 13:10:05.525645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.835 [2024-11-18 13:10:05.525653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.835 [2024-11-18 13:10:05.525659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:07.835 [2024-11-18 13:10:05.525675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.835 qpair failed and we were unable to recover it. 
00:27:08.096 [2024-11-18 13:10:05.535607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.096 [2024-11-18 13:10:05.535675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.096 [2024-11-18 13:10:05.535690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.096 [2024-11-18 13:10:05.535697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.096 [2024-11-18 13:10:05.535704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.096 [2024-11-18 13:10:05.535719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.096 qpair failed and we were unable to recover it. 
00:27:08.096 [2024-11-18 13:10:05.545626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.096 [2024-11-18 13:10:05.545680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.096 [2024-11-18 13:10:05.545694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.096 [2024-11-18 13:10:05.545702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.096 [2024-11-18 13:10:05.545709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.096 [2024-11-18 13:10:05.545728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.096 qpair failed and we were unable to recover it. 
[... identical CONNECT failure sequence repeated 34 more times at ~10 ms intervals, [2024-11-18 13:10:05.555631] through [2024-11-18 13:10:05.886700] (console timestamps 00:27:08.097-00:27:08.360): each iteration logs "Unknown controller ID 0x1" (ctrlr.c: 762), "Connect command failed, rc -5" against trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 (nvme_fabric.c: 599), "Connect command completed with error: sct 1, sc 130" (nvme_fabric.c: 610), "Failed to poll NVMe-oF Fabric CONNECT command" (nvme_tcp.c:2348), "Failed to connect tqpair=0x7fad24000b90" (nvme_tcp.c:2125), "CQ transport error -6 (No such device or address) on qpair id 1" (nvme_qpair.c: 812), then "qpair failed and we were unable to recover it." ...]
00:27:08.360 [2024-11-18 13:10:05.896582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.360 [2024-11-18 13:10:05.896636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.360 [2024-11-18 13:10:05.896653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.360 [2024-11-18 13:10:05.896660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.360 [2024-11-18 13:10:05.896667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.360 [2024-11-18 13:10:05.896683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-11-18 13:10:05.906658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.360 [2024-11-18 13:10:05.906713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.360 [2024-11-18 13:10:05.906728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.360 [2024-11-18 13:10:05.906735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.360 [2024-11-18 13:10:05.906742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.360 [2024-11-18 13:10:05.906757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-11-18 13:10:05.916642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.360 [2024-11-18 13:10:05.916716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.360 [2024-11-18 13:10:05.916730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.360 [2024-11-18 13:10:05.916737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.360 [2024-11-18 13:10:05.916744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.360 [2024-11-18 13:10:05.916759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-11-18 13:10:05.926645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.360 [2024-11-18 13:10:05.926695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.360 [2024-11-18 13:10:05.926709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.360 [2024-11-18 13:10:05.926716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.360 [2024-11-18 13:10:05.926722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.360 [2024-11-18 13:10:05.926738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-11-18 13:10:05.936687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.360 [2024-11-18 13:10:05.936743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.360 [2024-11-18 13:10:05.936757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.360 [2024-11-18 13:10:05.936768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.360 [2024-11-18 13:10:05.936775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.360 [2024-11-18 13:10:05.936791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-11-18 13:10:05.946777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.360 [2024-11-18 13:10:05.946836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.360 [2024-11-18 13:10:05.946850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:05.946857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:05.946864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:05.946879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:05.956789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:05.956843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:05.956856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:05.956864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:05.956870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:05.956886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:05.966816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:05.966872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:05.966887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:05.966895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:05.966902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:05.966918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:05.976859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:05.976916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:05.976930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:05.976937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:05.976944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:05.976965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:05.986885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:05.986940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:05.986953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:05.986960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:05.986967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:05.986983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:05.996910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:05.996962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:05.996976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:05.996983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:05.996990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:05.997006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:06.006938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:06.006987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:06.007001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:06.007008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:06.007015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:06.007031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:06.017002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:06.017108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:06.017124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:06.017131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:06.017138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:06.017154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:06.027016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:06.027090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:06.027105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:06.027113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:06.027119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:06.027134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:06.037049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:06.037110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:06.037124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:06.037132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:06.037138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:06.037153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-11-18 13:10:06.047073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.361 [2024-11-18 13:10:06.047149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.361 [2024-11-18 13:10:06.047164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.361 [2024-11-18 13:10:06.047171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.361 [2024-11-18 13:10:06.047178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.361 [2024-11-18 13:10:06.047194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.057028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.057083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.057097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.057104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.057112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.057127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.067085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.067142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.067156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.067166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.067173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.067188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.077143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.077198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.077212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.077219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.077226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.077243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.087164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.087219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.087232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.087239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.087246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.087261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.097143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.097201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.097215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.097222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.097229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.097244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.107226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.107284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.107298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.107306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.107313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.107331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.117229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.117283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.117297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.117304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.117311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.117327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.127276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.127330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.127344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.127354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.127361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.127378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.137320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.137389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.137403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.137410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.137416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.137431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.147344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.147431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.147445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.147453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.147459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.147474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.157372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.157431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.157445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.157452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.157459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.157474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.167399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.623 [2024-11-18 13:10:06.167469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.623 [2024-11-18 13:10:06.167483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.623 [2024-11-18 13:10:06.167490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.623 [2024-11-18 13:10:06.167497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.623 [2024-11-18 13:10:06.167513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.623 qpair failed and we were unable to recover it. 
00:27:08.623 [2024-11-18 13:10:06.177434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.177492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.177506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.177514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.177522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.177538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.187456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.187510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.187523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.187531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.187538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.187552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.197486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.197539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.197555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.197563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.197570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.197585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.207510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.207567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.207581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.207588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.207595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.207610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.217552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.217630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.217644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.217651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.217657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.217673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.227568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.227626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.227640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.227648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.227654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.227670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.237596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.237654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.237668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.237675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.237685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.237701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.247589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.247643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.247657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.247664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.247671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.247687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.257655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.257711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.257725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.257732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.257739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.257754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.267666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.267723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.267737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.267744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.267751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.267767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.277697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.277753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.277767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.277774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.277781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.277796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.287706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.287759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.287774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.287782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.287788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.287804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.297677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.624 [2024-11-18 13:10:06.297736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.624 [2024-11-18 13:10:06.297751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.624 [2024-11-18 13:10:06.297758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.624 [2024-11-18 13:10:06.297765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.624 [2024-11-18 13:10:06.297779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.624 qpair failed and we were unable to recover it. 
00:27:08.624 [2024-11-18 13:10:06.307793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.625 [2024-11-18 13:10:06.307850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.625 [2024-11-18 13:10:06.307864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.625 [2024-11-18 13:10:06.307871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.625 [2024-11-18 13:10:06.307878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.625 [2024-11-18 13:10:06.307894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.625 qpair failed and we were unable to recover it. 
00:27:08.625 [2024-11-18 13:10:06.317815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.625 [2024-11-18 13:10:06.317885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.625 [2024-11-18 13:10:06.317899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.625 [2024-11-18 13:10:06.317907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.625 [2024-11-18 13:10:06.317913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.625 [2024-11-18 13:10:06.317928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.625 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.327858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.886 [2024-11-18 13:10:06.327937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.886 [2024-11-18 13:10:06.327954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.886 [2024-11-18 13:10:06.327962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.886 [2024-11-18 13:10:06.327969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.886 [2024-11-18 13:10:06.327984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.886 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.337809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.886 [2024-11-18 13:10:06.337893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.886 [2024-11-18 13:10:06.337908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.886 [2024-11-18 13:10:06.337915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.886 [2024-11-18 13:10:06.337921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.886 [2024-11-18 13:10:06.337937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.886 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.347939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.886 [2024-11-18 13:10:06.347999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.886 [2024-11-18 13:10:06.348013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.886 [2024-11-18 13:10:06.348020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.886 [2024-11-18 13:10:06.348027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.886 [2024-11-18 13:10:06.348043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.886 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.357924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.886 [2024-11-18 13:10:06.358003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.886 [2024-11-18 13:10:06.358019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.886 [2024-11-18 13:10:06.358027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.886 [2024-11-18 13:10:06.358034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.886 [2024-11-18 13:10:06.358050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.886 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.367907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.886 [2024-11-18 13:10:06.367965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.886 [2024-11-18 13:10:06.367979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.886 [2024-11-18 13:10:06.367987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.886 [2024-11-18 13:10:06.367996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.886 [2024-11-18 13:10:06.368012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.886 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.377974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.886 [2024-11-18 13:10:06.378075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.886 [2024-11-18 13:10:06.378089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.886 [2024-11-18 13:10:06.378096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.886 [2024-11-18 13:10:06.378103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.886 [2024-11-18 13:10:06.378118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.886 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.387988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.886 [2024-11-18 13:10:06.388044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.886 [2024-11-18 13:10:06.388059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.886 [2024-11-18 13:10:06.388066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.886 [2024-11-18 13:10:06.388073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.886 [2024-11-18 13:10:06.388089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.886 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.398024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.886 [2024-11-18 13:10:06.398077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.886 [2024-11-18 13:10:06.398091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.886 [2024-11-18 13:10:06.398098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.886 [2024-11-18 13:10:06.398105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.886 [2024-11-18 13:10:06.398120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.886 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.408076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.886 [2024-11-18 13:10:06.408135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.886 [2024-11-18 13:10:06.408149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.886 [2024-11-18 13:10:06.408156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.886 [2024-11-18 13:10:06.408163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.886 [2024-11-18 13:10:06.408178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.886 qpair failed and we were unable to recover it. 
00:27:08.886 [2024-11-18 13:10:06.418090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.887 [2024-11-18 13:10:06.418145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.887 [2024-11-18 13:10:06.418159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.887 [2024-11-18 13:10:06.418167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.887 [2024-11-18 13:10:06.418173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.887 [2024-11-18 13:10:06.418190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.887 qpair failed and we were unable to recover it. 
00:27:08.887 [2024-11-18 13:10:06.428111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.887 [2024-11-18 13:10:06.428190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.887 [2024-11-18 13:10:06.428204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.887 [2024-11-18 13:10:06.428211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.887 [2024-11-18 13:10:06.428218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.887 [2024-11-18 13:10:06.428233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.887 qpair failed and we were unable to recover it. 
00:27:08.887 [2024-11-18 13:10:06.438144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.887 [2024-11-18 13:10:06.438200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.887 [2024-11-18 13:10:06.438214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.887 [2024-11-18 13:10:06.438222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.887 [2024-11-18 13:10:06.438228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.887 [2024-11-18 13:10:06.438244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.887 qpair failed and we were unable to recover it. 
00:27:08.887 [2024-11-18 13:10:06.448163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.887 [2024-11-18 13:10:06.448212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.887 [2024-11-18 13:10:06.448225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.887 [2024-11-18 13:10:06.448232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.887 [2024-11-18 13:10:06.448239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.887 [2024-11-18 13:10:06.448254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.887 qpair failed and we were unable to recover it. 
00:27:08.887 [2024-11-18 13:10:06.458135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.887 [2024-11-18 13:10:06.458200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.887 [2024-11-18 13:10:06.458217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.887 [2024-11-18 13:10:06.458225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.887 [2024-11-18 13:10:06.458231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.887 [2024-11-18 13:10:06.458246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.887 qpair failed and we were unable to recover it. 
00:27:08.887 [2024-11-18 13:10:06.468329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.887 [2024-11-18 13:10:06.468400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.887 [2024-11-18 13:10:06.468414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.887 [2024-11-18 13:10:06.468421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.887 [2024-11-18 13:10:06.468428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:08.887 [2024-11-18 13:10:06.468444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.887 qpair failed and we were unable to recover it. 
00:27:08.887 [2024-11-18 13:10:06.478302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.887 [2024-11-18 13:10:06.478362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.887 [2024-11-18 13:10:06.478376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.887 [2024-11-18 13:10:06.478383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.887 [2024-11-18 13:10:06.478390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.887 [2024-11-18 13:10:06.478405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.887 qpair failed and we were unable to recover it.
00:27:08.887 [2024-11-18 13:10:06.488256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.887 [2024-11-18 13:10:06.488321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.887 [2024-11-18 13:10:06.488334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.887 [2024-11-18 13:10:06.488341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.887 [2024-11-18 13:10:06.488348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.887 [2024-11-18 13:10:06.488368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.887 qpair failed and we were unable to recover it.
00:27:08.887 [2024-11-18 13:10:06.498363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.887 [2024-11-18 13:10:06.498421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.887 [2024-11-18 13:10:06.498435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.887 [2024-11-18 13:10:06.498445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.887 [2024-11-18 13:10:06.498451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.887 [2024-11-18 13:10:06.498467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.887 qpair failed and we were unable to recover it.
00:27:08.887 [2024-11-18 13:10:06.508346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.887 [2024-11-18 13:10:06.508413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.887 [2024-11-18 13:10:06.508427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.887 [2024-11-18 13:10:06.508436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.887 [2024-11-18 13:10:06.508443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.887 [2024-11-18 13:10:06.508458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.887 qpair failed and we were unable to recover it.
00:27:08.887 [2024-11-18 13:10:06.518369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.887 [2024-11-18 13:10:06.518425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.887 [2024-11-18 13:10:06.518439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.887 [2024-11-18 13:10:06.518447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.887 [2024-11-18 13:10:06.518454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.887 [2024-11-18 13:10:06.518470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.887 qpair failed and we were unable to recover it.
00:27:08.887 [2024-11-18 13:10:06.528440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.887 [2024-11-18 13:10:06.528525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.887 [2024-11-18 13:10:06.528539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.887 [2024-11-18 13:10:06.528546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.887 [2024-11-18 13:10:06.528553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.887 [2024-11-18 13:10:06.528568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.887 qpair failed and we were unable to recover it.
00:27:08.887 [2024-11-18 13:10:06.538425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.887 [2024-11-18 13:10:06.538482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.887 [2024-11-18 13:10:06.538495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.887 [2024-11-18 13:10:06.538504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.887 [2024-11-18 13:10:06.538511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.887 [2024-11-18 13:10:06.538529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.887 qpair failed and we were unable to recover it.
00:27:08.887 [2024-11-18 13:10:06.548456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.888 [2024-11-18 13:10:06.548511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.888 [2024-11-18 13:10:06.548525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.888 [2024-11-18 13:10:06.548532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.888 [2024-11-18 13:10:06.548539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.888 [2024-11-18 13:10:06.548555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.888 qpair failed and we were unable to recover it.
00:27:08.888 [2024-11-18 13:10:06.558483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.888 [2024-11-18 13:10:06.558541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.888 [2024-11-18 13:10:06.558554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.888 [2024-11-18 13:10:06.558562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.888 [2024-11-18 13:10:06.558568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.888 [2024-11-18 13:10:06.558583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.888 qpair failed and we were unable to recover it.
00:27:08.888 [2024-11-18 13:10:06.568508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.888 [2024-11-18 13:10:06.568600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.888 [2024-11-18 13:10:06.568613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.888 [2024-11-18 13:10:06.568621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.888 [2024-11-18 13:10:06.568627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.888 [2024-11-18 13:10:06.568642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.888 qpair failed and we were unable to recover it.
00:27:08.888 [2024-11-18 13:10:06.578573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.888 [2024-11-18 13:10:06.578630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.888 [2024-11-18 13:10:06.578644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.888 [2024-11-18 13:10:06.578652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.888 [2024-11-18 13:10:06.578658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:08.888 [2024-11-18 13:10:06.578673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.888 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.588596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.588658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.149 [2024-11-18 13:10:06.588672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.149 [2024-11-18 13:10:06.588679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.149 [2024-11-18 13:10:06.588686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.149 [2024-11-18 13:10:06.588700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.149 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.598531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.598584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.149 [2024-11-18 13:10:06.598597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.149 [2024-11-18 13:10:06.598605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.149 [2024-11-18 13:10:06.598612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.149 [2024-11-18 13:10:06.598627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.149 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.608623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.608674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.149 [2024-11-18 13:10:06.608687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.149 [2024-11-18 13:10:06.608695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.149 [2024-11-18 13:10:06.608701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.149 [2024-11-18 13:10:06.608717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.149 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.618594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.618675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.149 [2024-11-18 13:10:06.618690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.149 [2024-11-18 13:10:06.618697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.149 [2024-11-18 13:10:06.618703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.149 [2024-11-18 13:10:06.618718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.149 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.628722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.628779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.149 [2024-11-18 13:10:06.628794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.149 [2024-11-18 13:10:06.628805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.149 [2024-11-18 13:10:06.628812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.149 [2024-11-18 13:10:06.628827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.149 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.638729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.638793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.149 [2024-11-18 13:10:06.638807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.149 [2024-11-18 13:10:06.638815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.149 [2024-11-18 13:10:06.638822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.149 [2024-11-18 13:10:06.638837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.149 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.648749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.648803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.149 [2024-11-18 13:10:06.648818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.149 [2024-11-18 13:10:06.648825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.149 [2024-11-18 13:10:06.648832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.149 [2024-11-18 13:10:06.648848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.149 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.658776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.658848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.149 [2024-11-18 13:10:06.658862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.149 [2024-11-18 13:10:06.658870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.149 [2024-11-18 13:10:06.658876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.149 [2024-11-18 13:10:06.658892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.149 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.668834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.668888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.149 [2024-11-18 13:10:06.668902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.149 [2024-11-18 13:10:06.668909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.149 [2024-11-18 13:10:06.668916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.149 [2024-11-18 13:10:06.668936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.149 qpair failed and we were unable to recover it.
00:27:09.149 [2024-11-18 13:10:06.678829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.149 [2024-11-18 13:10:06.678878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.678892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.678899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.678905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.678921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.688866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.688933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.688947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.688955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.688961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.688976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.698873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.698932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.698948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.698955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.698962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.698978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.708930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.708983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.708997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.709005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.709012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.709028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.718976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.719047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.719063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.719070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.719078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.719093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.728921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.728972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.728986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.728994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.729000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.729016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.739008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.739067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.739080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.739088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.739094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.739110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.749035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.749092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.749114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.749122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.749129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.749150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.759059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.759115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.759133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.759141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.759147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.759163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.769082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.769136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.769151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.769158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.769165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.769181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.779125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.779199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.779214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.779222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.779228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.779244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.789151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.789207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.789221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.789228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.789234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.789250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.799202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.799255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.799269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.150 [2024-11-18 13:10:06.799277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.150 [2024-11-18 13:10:06.799287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.150 [2024-11-18 13:10:06.799303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.150 qpair failed and we were unable to recover it.
00:27:09.150 [2024-11-18 13:10:06.809198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.150 [2024-11-18 13:10:06.809254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.150 [2024-11-18 13:10:06.809268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.151 [2024-11-18 13:10:06.809276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.151 [2024-11-18 13:10:06.809282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.151 [2024-11-18 13:10:06.809298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.151 qpair failed and we were unable to recover it.
00:27:09.151 [2024-11-18 13:10:06.819236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.151 [2024-11-18 13:10:06.819293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.151 [2024-11-18 13:10:06.819307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.151 [2024-11-18 13:10:06.819314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.151 [2024-11-18 13:10:06.819320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.151 [2024-11-18 13:10:06.819335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.151 qpair failed and we were unable to recover it.
00:27:09.151 [2024-11-18 13:10:06.829265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.151 [2024-11-18 13:10:06.829325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.151 [2024-11-18 13:10:06.829339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.151 [2024-11-18 13:10:06.829346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.151 [2024-11-18 13:10:06.829356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.151 [2024-11-18 13:10:06.829372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.151 qpair failed and we were unable to recover it. 
00:27:09.151 [2024-11-18 13:10:06.839210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.151 [2024-11-18 13:10:06.839262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.151 [2024-11-18 13:10:06.839275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.151 [2024-11-18 13:10:06.839282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.151 [2024-11-18 13:10:06.839289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.151 [2024-11-18 13:10:06.839305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.151 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-11-18 13:10:06.849318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.411 [2024-11-18 13:10:06.849380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.411 [2024-11-18 13:10:06.849395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.411 [2024-11-18 13:10:06.849403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.411 [2024-11-18 13:10:06.849409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.411 [2024-11-18 13:10:06.849425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-11-18 13:10:06.859350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.411 [2024-11-18 13:10:06.859410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.411 [2024-11-18 13:10:06.859424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.411 [2024-11-18 13:10:06.859431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.411 [2024-11-18 13:10:06.859437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.411 [2024-11-18 13:10:06.859453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-11-18 13:10:06.869373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.411 [2024-11-18 13:10:06.869431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.411 [2024-11-18 13:10:06.869445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.411 [2024-11-18 13:10:06.869452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.869459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.869475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.879435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.879501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.879515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.879523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.879529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.879545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.889413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.889466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.889485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.889493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.889499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.889516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.899549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.899622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.899636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.899644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.899650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.899666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.909501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.909587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.909602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.909609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.909616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.909631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.919511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.919564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.919578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.919585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.919592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.919607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.929524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.929588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.929601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.929609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.929618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.929633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.939518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.939588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.939602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.939610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.939616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.939632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.949607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.949694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.949709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.949716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.949722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.949737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.959617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.959676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.959691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.959700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.959706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.959722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.969662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.969733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.969748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.969756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.969762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.969778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.979611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.979669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.979683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.979691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.979697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.979713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.989643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.989698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.989713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.989720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.412 [2024-11-18 13:10:06.989727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.412 [2024-11-18 13:10:06.989742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-11-18 13:10:06.999769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.412 [2024-11-18 13:10:06.999825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.412 [2024-11-18 13:10:06.999839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.412 [2024-11-18 13:10:06.999847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:06.999854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:06.999870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.009691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.413 [2024-11-18 13:10:07.009748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.413 [2024-11-18 13:10:07.009762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.413 [2024-11-18 13:10:07.009770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:07.009776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:07.009792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.019786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.413 [2024-11-18 13:10:07.019845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.413 [2024-11-18 13:10:07.019859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.413 [2024-11-18 13:10:07.019866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:07.019873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:07.019888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.029793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.413 [2024-11-18 13:10:07.029857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.413 [2024-11-18 13:10:07.029872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.413 [2024-11-18 13:10:07.029879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:07.029886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:07.029902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.039862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.413 [2024-11-18 13:10:07.039916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.413 [2024-11-18 13:10:07.039929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.413 [2024-11-18 13:10:07.039936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:07.039943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:07.039958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.049814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.413 [2024-11-18 13:10:07.049868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.413 [2024-11-18 13:10:07.049882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.413 [2024-11-18 13:10:07.049889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:07.049896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:07.049912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.059884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.413 [2024-11-18 13:10:07.059940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.413 [2024-11-18 13:10:07.059954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.413 [2024-11-18 13:10:07.059964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:07.059971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:07.059986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.069855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.413 [2024-11-18 13:10:07.069907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.413 [2024-11-18 13:10:07.069921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.413 [2024-11-18 13:10:07.069929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:07.069935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:07.069951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.079988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.413 [2024-11-18 13:10:07.080039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.413 [2024-11-18 13:10:07.080053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.413 [2024-11-18 13:10:07.080061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:07.080068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:07.080083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.089956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.413 [2024-11-18 13:10:07.090047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.413 [2024-11-18 13:10:07.090061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.413 [2024-11-18 13:10:07.090068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.413 [2024-11-18 13:10:07.090074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.413 [2024-11-18 13:10:07.090089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.413 qpair failed and we were unable to recover it. 
00:27:09.413 [2024-11-18 13:10:07.100029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.413 [2024-11-18 13:10:07.100091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.413 [2024-11-18 13:10:07.100105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.413 [2024-11-18 13:10:07.100113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.413 [2024-11-18 13:10:07.100120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.413 [2024-11-18 13:10:07.100140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.413 qpair failed and we were unable to recover it.
00:27:09.674 [2024-11-18 13:10:07.110053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.674 [2024-11-18 13:10:07.110113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.674 [2024-11-18 13:10:07.110129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.674 [2024-11-18 13:10:07.110137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.674 [2024-11-18 13:10:07.110144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.674 [2024-11-18 13:10:07.110160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.674 qpair failed and we were unable to recover it.
00:27:09.674 [2024-11-18 13:10:07.120060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.674 [2024-11-18 13:10:07.120116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.674 [2024-11-18 13:10:07.120130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.674 [2024-11-18 13:10:07.120138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.674 [2024-11-18 13:10:07.120145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.674 [2024-11-18 13:10:07.120161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.674 qpair failed and we were unable to recover it.
00:27:09.674 [2024-11-18 13:10:07.130077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.130165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.130179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.130186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.130192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.130207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.140067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.140124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.140138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.140146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.140152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.140168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.150141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.150225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.150239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.150246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.150253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.150269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.160125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.160213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.160227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.160235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.160241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.160256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.170209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.170262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.170276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.170284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.170291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.170307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.180223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.180279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.180293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.180300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.180307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.180323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.190198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.190254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.190267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.190277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.190283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.190298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.200276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.200332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.200346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.200357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.200365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.200380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.210287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.210369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.210384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.210391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.210398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.210413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.220290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.220346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.220364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.220372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.220379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.220395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.230361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.230445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.230459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.230467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.230474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.230492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.240365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.240420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.240434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.240441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.240448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.240464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.250413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.675 [2024-11-18 13:10:07.250467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.675 [2024-11-18 13:10:07.250481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.675 [2024-11-18 13:10:07.250488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.675 [2024-11-18 13:10:07.250495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.675 [2024-11-18 13:10:07.250511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.675 qpair failed and we were unable to recover it.
00:27:09.675 [2024-11-18 13:10:07.260395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.260449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.260463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.260470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.260477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.260493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.270478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.270559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.270573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.270580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.270588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.270603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.280461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.280553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.280566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.280574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.280580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.280596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.290536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.290593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.290608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.290616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.290622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.290638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.300531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.300589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.300603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.300610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.300616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.300632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.310536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.310590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.310604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.310611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.310618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.310633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.320562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.320620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.320637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.320646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.320654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.320669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.330596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.330652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.330666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.330673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.330679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.330695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.340722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.340783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.340797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.340804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.340810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.340826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.350720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.350779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.350794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.350802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.350810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.350826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.360777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.676 [2024-11-18 13:10:07.360843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.676 [2024-11-18 13:10:07.360857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.676 [2024-11-18 13:10:07.360865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.676 [2024-11-18 13:10:07.360875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.676 [2024-11-18 13:10:07.360891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.676 qpair failed and we were unable to recover it.
00:27:09.676 [2024-11-18 13:10:07.370771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.937 [2024-11-18 13:10:07.370825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.938 [2024-11-18 13:10:07.370840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.938 [2024-11-18 13:10:07.370848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.938 [2024-11-18 13:10:07.370855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.938 [2024-11-18 13:10:07.370872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.938 qpair failed and we were unable to recover it.
00:27:09.938 [2024-11-18 13:10:07.380728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.938 [2024-11-18 13:10:07.380796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.938 [2024-11-18 13:10:07.380810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.938 [2024-11-18 13:10:07.380817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.938 [2024-11-18 13:10:07.380823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.938 [2024-11-18 13:10:07.380839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.938 qpair failed and we were unable to recover it.
00:27:09.938 [2024-11-18 13:10:07.390843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.938 [2024-11-18 13:10:07.390899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.938 [2024-11-18 13:10:07.390915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.938 [2024-11-18 13:10:07.390922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.938 [2024-11-18 13:10:07.390929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.938 [2024-11-18 13:10:07.390945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.938 qpair failed and we were unable to recover it.
00:27:09.938 [2024-11-18 13:10:07.400796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.938 [2024-11-18 13:10:07.400852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.938 [2024-11-18 13:10:07.400866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.938 [2024-11-18 13:10:07.400874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.938 [2024-11-18 13:10:07.400880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.938 [2024-11-18 13:10:07.400895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.938 qpair failed and we were unable to recover it.
00:27:09.938 [2024-11-18 13:10:07.410803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.938 [2024-11-18 13:10:07.410851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.938 [2024-11-18 13:10:07.410866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.938 [2024-11-18 13:10:07.410873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.938 [2024-11-18 13:10:07.410880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.938 [2024-11-18 13:10:07.410896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.938 qpair failed and we were unable to recover it.
00:27:09.938 [2024-11-18 13:10:07.420845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.938 [2024-11-18 13:10:07.420901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.938 [2024-11-18 13:10:07.420915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.938 [2024-11-18 13:10:07.420923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.938 [2024-11-18 13:10:07.420929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.938 [2024-11-18 13:10:07.420945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.938 qpair failed and we were unable to recover it.
00:27:09.938 [2024-11-18 13:10:07.430962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.938 [2024-11-18 13:10:07.431017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.938 [2024-11-18 13:10:07.431031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.938 [2024-11-18 13:10:07.431038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.938 [2024-11-18 13:10:07.431044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.938 [2024-11-18 13:10:07.431060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.938 qpair failed and we were unable to recover it.
00:27:09.938 [2024-11-18 13:10:07.440897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.938 [2024-11-18 13:10:07.440964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.938 [2024-11-18 13:10:07.440978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.938 [2024-11-18 13:10:07.440985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.938 [2024-11-18 13:10:07.440992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:09.938 [2024-11-18 13:10:07.441008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.938 qpair failed and we were unable to recover it.
00:27:09.938 [2024-11-18 13:10:07.450999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.938 [2024-11-18 13:10:07.451050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.938 [2024-11-18 13:10:07.451070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.938 [2024-11-18 13:10:07.451077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.938 [2024-11-18 13:10:07.451084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.938 [2024-11-18 13:10:07.451099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.938 qpair failed and we were unable to recover it. 
00:27:09.938 [2024-11-18 13:10:07.461036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.938 [2024-11-18 13:10:07.461090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.938 [2024-11-18 13:10:07.461105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.938 [2024-11-18 13:10:07.461113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.938 [2024-11-18 13:10:07.461120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.938 [2024-11-18 13:10:07.461135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.938 qpair failed and we were unable to recover it. 
00:27:09.938 [2024-11-18 13:10:07.471102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.938 [2024-11-18 13:10:07.471153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.938 [2024-11-18 13:10:07.471166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.938 [2024-11-18 13:10:07.471174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.938 [2024-11-18 13:10:07.471180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.938 [2024-11-18 13:10:07.471196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.938 qpair failed and we were unable to recover it. 
00:27:09.938 [2024-11-18 13:10:07.481079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.938 [2024-11-18 13:10:07.481133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.938 [2024-11-18 13:10:07.481147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.938 [2024-11-18 13:10:07.481154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.938 [2024-11-18 13:10:07.481161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.938 [2024-11-18 13:10:07.481177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.938 qpair failed and we were unable to recover it. 
00:27:09.938 [2024-11-18 13:10:07.491110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.938 [2024-11-18 13:10:07.491163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.938 [2024-11-18 13:10:07.491177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.938 [2024-11-18 13:10:07.491185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.938 [2024-11-18 13:10:07.491195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.491211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.501189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.501244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.501258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.501266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.501273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.501289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.511171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.511225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.511239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.511246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.511253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.511269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.521206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.521261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.521276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.521284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.521290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.521305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.531223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.531311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.531326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.531333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.531339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.531358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.541262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.541329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.541343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.541355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.541362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.541378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.551298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.551357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.551371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.551379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.551385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.551400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.561320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.561380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.561395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.561403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.561409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.561424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.571346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.571434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.571447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.571455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.571462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.571477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.581379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.581441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.581455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.581463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.581470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.581485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.591410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.591482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.591496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.591504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.591510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.591525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.601435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.601490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.601504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.601511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.601518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.601533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.611454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.611509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.611523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.611531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.939 [2024-11-18 13:10:07.611538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.939 [2024-11-18 13:10:07.611554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.939 qpair failed and we were unable to recover it. 
00:27:09.939 [2024-11-18 13:10:07.621497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.939 [2024-11-18 13:10:07.621558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.939 [2024-11-18 13:10:07.621573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.939 [2024-11-18 13:10:07.621583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.940 [2024-11-18 13:10:07.621590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.940 [2024-11-18 13:10:07.621605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.940 qpair failed and we were unable to recover it. 
00:27:09.940 [2024-11-18 13:10:07.631531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.940 [2024-11-18 13:10:07.631587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.940 [2024-11-18 13:10:07.631601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.940 [2024-11-18 13:10:07.631609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.940 [2024-11-18 13:10:07.631615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:09.940 [2024-11-18 13:10:07.631631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.940 qpair failed and we were unable to recover it. 
00:27:10.200 [2024-11-18 13:10:07.641551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.200 [2024-11-18 13:10:07.641610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.200 [2024-11-18 13:10:07.641625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.200 [2024-11-18 13:10:07.641633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.200 [2024-11-18 13:10:07.641640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.200 [2024-11-18 13:10:07.641655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.200 qpair failed and we were unable to recover it. 
00:27:10.200 [2024-11-18 13:10:07.651634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.200 [2024-11-18 13:10:07.651738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.200 [2024-11-18 13:10:07.651752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.200 [2024-11-18 13:10:07.651760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.200 [2024-11-18 13:10:07.651768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.200 [2024-11-18 13:10:07.651784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.200 qpair failed and we were unable to recover it. 
00:27:10.200 [2024-11-18 13:10:07.661612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.200 [2024-11-18 13:10:07.661668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.200 [2024-11-18 13:10:07.661682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.200 [2024-11-18 13:10:07.661689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.200 [2024-11-18 13:10:07.661696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.200 [2024-11-18 13:10:07.661715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.200 qpair failed and we were unable to recover it. 
00:27:10.200 [2024-11-18 13:10:07.671645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.200 [2024-11-18 13:10:07.671702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.200 [2024-11-18 13:10:07.671716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.200 [2024-11-18 13:10:07.671724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.200 [2024-11-18 13:10:07.671731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.200 [2024-11-18 13:10:07.671746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.200 qpair failed and we were unable to recover it. 
00:27:10.200 [2024-11-18 13:10:07.681664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.200 [2024-11-18 13:10:07.681723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.200 [2024-11-18 13:10:07.681738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.200 [2024-11-18 13:10:07.681746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.200 [2024-11-18 13:10:07.681752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.200 [2024-11-18 13:10:07.681768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.200 qpair failed and we were unable to recover it. 
00:27:10.200 [2024-11-18 13:10:07.691687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.200 [2024-11-18 13:10:07.691739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.200 [2024-11-18 13:10:07.691752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.200 [2024-11-18 13:10:07.691760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.200 [2024-11-18 13:10:07.691767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.200 [2024-11-18 13:10:07.691782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.200 qpair failed and we were unable to recover it. 
00:27:10.200 [2024-11-18 13:10:07.701724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.200 [2024-11-18 13:10:07.701781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.200 [2024-11-18 13:10:07.701795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.200 [2024-11-18 13:10:07.701802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.200 [2024-11-18 13:10:07.701809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.201 [2024-11-18 13:10:07.701825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.201 qpair failed and we were unable to recover it. 
00:27:10.201 [2024-11-18 13:10:07.711735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.201 [2024-11-18 13:10:07.711796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.201 [2024-11-18 13:10:07.711812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.201 [2024-11-18 13:10:07.711820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.201 [2024-11-18 13:10:07.711827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.201 [2024-11-18 13:10:07.711844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.201 qpair failed and we were unable to recover it. 
00:27:10.201 [2024-11-18 13:10:07.721707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.721765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.721779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.721787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.721793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.721808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.731808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.731867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.731881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.731888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.731895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.731910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.741842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.741915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.741930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.741937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.741945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.741960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.751844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.751906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.751924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.751932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.751938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.751954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.761926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.761985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.761999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.762007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.762013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.762029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.771895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.771953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.771967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.771975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.771982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.771997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.781958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.782012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.782026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.782033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.782040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.782055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.791994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.792049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.792063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.792070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.792077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.792096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.802003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.802056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.802070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.802077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.802084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.802100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.812042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.812098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.812112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.812120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.812126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.812141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.822070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.822127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.822140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.822148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.822155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.201 [2024-11-18 13:10:07.822171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.201 qpair failed and we were unable to recover it.
00:27:10.201 [2024-11-18 13:10:07.832098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.201 [2024-11-18 13:10:07.832176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.201 [2024-11-18 13:10:07.832190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.201 [2024-11-18 13:10:07.832198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.201 [2024-11-18 13:10:07.832204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.202 [2024-11-18 13:10:07.832220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.202 qpair failed and we were unable to recover it.
00:27:10.202 [2024-11-18 13:10:07.842119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.202 [2024-11-18 13:10:07.842173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.202 [2024-11-18 13:10:07.842187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.202 [2024-11-18 13:10:07.842194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.202 [2024-11-18 13:10:07.842200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.202 [2024-11-18 13:10:07.842216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.202 qpair failed and we were unable to recover it.
00:27:10.202 [2024-11-18 13:10:07.852149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.202 [2024-11-18 13:10:07.852201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.202 [2024-11-18 13:10:07.852216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.202 [2024-11-18 13:10:07.852223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.202 [2024-11-18 13:10:07.852230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.202 [2024-11-18 13:10:07.852246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.202 qpair failed and we were unable to recover it.
00:27:10.202 [2024-11-18 13:10:07.862192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.202 [2024-11-18 13:10:07.862249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.202 [2024-11-18 13:10:07.862263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.202 [2024-11-18 13:10:07.862270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.202 [2024-11-18 13:10:07.862277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.202 [2024-11-18 13:10:07.862294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.202 qpair failed and we were unable to recover it.
00:27:10.202 [2024-11-18 13:10:07.872213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.202 [2024-11-18 13:10:07.872269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.202 [2024-11-18 13:10:07.872283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.202 [2024-11-18 13:10:07.872290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.202 [2024-11-18 13:10:07.872297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.202 [2024-11-18 13:10:07.872313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.202 qpair failed and we were unable to recover it.
00:27:10.202 [2024-11-18 13:10:07.882240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.202 [2024-11-18 13:10:07.882294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.202 [2024-11-18 13:10:07.882312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.202 [2024-11-18 13:10:07.882319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.202 [2024-11-18 13:10:07.882325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.202 [2024-11-18 13:10:07.882341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.202 qpair failed and we were unable to recover it.
00:27:10.202 [2024-11-18 13:10:07.892261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.202 [2024-11-18 13:10:07.892313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.202 [2024-11-18 13:10:07.892328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.202 [2024-11-18 13:10:07.892335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.202 [2024-11-18 13:10:07.892341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.202 [2024-11-18 13:10:07.892361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.202 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.902280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.902335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.902349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.902362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.902368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.902384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.912323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.912382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.912396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.912403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.912409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.912424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.922378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.922436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.922450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.922458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.922468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.922484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.932388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.932462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.932477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.932484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.932490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.932507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.942406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.942508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.942524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.942532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.942538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.942554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.952432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.952492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.952506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.952514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.952520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.952535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.962459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.962515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.962529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.962537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.962543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.962559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.972443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.972507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.972520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.972528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.972534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.972550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.982543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.982606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.982620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.982628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.982635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.982651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:07.992546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:07.992600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:07.992614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:07.992621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:07.992628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:07.992644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:08.002570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:08.002628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:08.002641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:08.002649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:08.002656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:08.002671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.463 qpair failed and we were unable to recover it.
00:27:10.463 [2024-11-18 13:10:08.012592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.463 [2024-11-18 13:10:08.012643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.463 [2024-11-18 13:10:08.012660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.463 [2024-11-18 13:10:08.012667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.463 [2024-11-18 13:10:08.012674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.463 [2024-11-18 13:10:08.012690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.464 qpair failed and we were unable to recover it.
00:27:10.464 [2024-11-18 13:10:08.022616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.464 [2024-11-18 13:10:08.022674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.464 [2024-11-18 13:10:08.022687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.464 [2024-11-18 13:10:08.022695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.464 [2024-11-18 13:10:08.022701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.464 [2024-11-18 13:10:08.022717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.464 qpair failed and we were unable to recover it.
00:27:10.464 [2024-11-18 13:10:08.032667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.464 [2024-11-18 13:10:08.032721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.464 [2024-11-18 13:10:08.032735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.464 [2024-11-18 13:10:08.032743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.464 [2024-11-18 13:10:08.032750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.464 [2024-11-18 13:10:08.032766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.464 qpair failed and we were unable to recover it.
00:27:10.464 [2024-11-18 13:10:08.042698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.464 [2024-11-18 13:10:08.042753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.464 [2024-11-18 13:10:08.042768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.464 [2024-11-18 13:10:08.042776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.464 [2024-11-18 13:10:08.042783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.464 [2024-11-18 13:10:08.042799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.464 qpair failed and we were unable to recover it.
00:27:10.464 [2024-11-18 13:10:08.052713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.464 [2024-11-18 13:10:08.052766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.464 [2024-11-18 13:10:08.052781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.464 [2024-11-18 13:10:08.052789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.464 [2024-11-18 13:10:08.052798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.464 [2024-11-18 13:10:08.052814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.464 qpair failed and we were unable to recover it.
00:27:10.464 [2024-11-18 13:10:08.062721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.464 [2024-11-18 13:10:08.062780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.464 [2024-11-18 13:10:08.062794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.464 [2024-11-18 13:10:08.062801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.464 [2024-11-18 13:10:08.062808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:10.464 [2024-11-18 13:10:08.062823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.464 qpair failed and we were unable to recover it.
00:27:10.464 [2024-11-18 13:10:08.072809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.464 [2024-11-18 13:10:08.072867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.464 [2024-11-18 13:10:08.072882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.464 [2024-11-18 13:10:08.072890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.464 [2024-11-18 13:10:08.072897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.464 [2024-11-18 13:10:08.072912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-18 13:10:08.082798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.464 [2024-11-18 13:10:08.082848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.464 [2024-11-18 13:10:08.082861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.464 [2024-11-18 13:10:08.082869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.464 [2024-11-18 13:10:08.082875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.464 [2024-11-18 13:10:08.082891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-18 13:10:08.092829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.464 [2024-11-18 13:10:08.092881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.464 [2024-11-18 13:10:08.092895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.464 [2024-11-18 13:10:08.092902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.464 [2024-11-18 13:10:08.092909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.464 [2024-11-18 13:10:08.092924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-18 13:10:08.102886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.464 [2024-11-18 13:10:08.102942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.464 [2024-11-18 13:10:08.102958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.464 [2024-11-18 13:10:08.102966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.464 [2024-11-18 13:10:08.102973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.464 [2024-11-18 13:10:08.102989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-18 13:10:08.112871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.464 [2024-11-18 13:10:08.112924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.464 [2024-11-18 13:10:08.112938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.464 [2024-11-18 13:10:08.112945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.464 [2024-11-18 13:10:08.112952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.464 [2024-11-18 13:10:08.112968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-18 13:10:08.122917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.464 [2024-11-18 13:10:08.122987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.464 [2024-11-18 13:10:08.123001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.464 [2024-11-18 13:10:08.123008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.464 [2024-11-18 13:10:08.123014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.464 [2024-11-18 13:10:08.123030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-18 13:10:08.132958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.464 [2024-11-18 13:10:08.133011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.464 [2024-11-18 13:10:08.133025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.464 [2024-11-18 13:10:08.133032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.464 [2024-11-18 13:10:08.133039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.464 [2024-11-18 13:10:08.133056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.464 [2024-11-18 13:10:08.142987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.464 [2024-11-18 13:10:08.143065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.464 [2024-11-18 13:10:08.143080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.464 [2024-11-18 13:10:08.143088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.464 [2024-11-18 13:10:08.143094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.464 [2024-11-18 13:10:08.143109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.464 qpair failed and we were unable to recover it. 
00:27:10.465 [2024-11-18 13:10:08.153006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.465 [2024-11-18 13:10:08.153094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.465 [2024-11-18 13:10:08.153108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.465 [2024-11-18 13:10:08.153115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.465 [2024-11-18 13:10:08.153122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.465 [2024-11-18 13:10:08.153138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.465 qpair failed and we were unable to recover it. 
00:27:10.725 [2024-11-18 13:10:08.163079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.725 [2024-11-18 13:10:08.163134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.725 [2024-11-18 13:10:08.163149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.725 [2024-11-18 13:10:08.163156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.725 [2024-11-18 13:10:08.163164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.725 [2024-11-18 13:10:08.163180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.725 qpair failed and we were unable to recover it. 
00:27:10.725 [2024-11-18 13:10:08.173056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.725 [2024-11-18 13:10:08.173108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.725 [2024-11-18 13:10:08.173122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.725 [2024-11-18 13:10:08.173129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.725 [2024-11-18 13:10:08.173136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.725 [2024-11-18 13:10:08.173152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.725 qpair failed and we were unable to recover it. 
00:27:10.725 [2024-11-18 13:10:08.183097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.725 [2024-11-18 13:10:08.183153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.725 [2024-11-18 13:10:08.183167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.725 [2024-11-18 13:10:08.183178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.725 [2024-11-18 13:10:08.183185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.725 [2024-11-18 13:10:08.183200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.725 qpair failed and we were unable to recover it. 
00:27:10.725 [2024-11-18 13:10:08.193118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.725 [2024-11-18 13:10:08.193174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.725 [2024-11-18 13:10:08.193190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.193198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.193205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.193220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.203133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.726 [2024-11-18 13:10:08.203190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.726 [2024-11-18 13:10:08.203204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.203212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.203218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.203234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.213191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.726 [2024-11-18 13:10:08.213243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.726 [2024-11-18 13:10:08.213257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.213264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.213271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.213287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.223208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.726 [2024-11-18 13:10:08.223264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.726 [2024-11-18 13:10:08.223278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.223285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.223293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.223315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.233237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.726 [2024-11-18 13:10:08.233297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.726 [2024-11-18 13:10:08.233311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.233319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.233326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.233341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.243266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.726 [2024-11-18 13:10:08.243322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.726 [2024-11-18 13:10:08.243336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.243343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.243350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.243369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.253291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.726 [2024-11-18 13:10:08.253385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.726 [2024-11-18 13:10:08.253400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.253407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.253413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.253429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.263381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.726 [2024-11-18 13:10:08.263484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.726 [2024-11-18 13:10:08.263498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.263506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.263512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.263528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.273350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.726 [2024-11-18 13:10:08.273411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.726 [2024-11-18 13:10:08.273426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.273433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.273440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.273456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.283382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.726 [2024-11-18 13:10:08.283436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.726 [2024-11-18 13:10:08.283451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.726 [2024-11-18 13:10:08.283458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.726 [2024-11-18 13:10:08.283465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.726 [2024-11-18 13:10:08.283479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.726 qpair failed and we were unable to recover it. 
00:27:10.726 [2024-11-18 13:10:08.293412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.293471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.293487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.293494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.727 [2024-11-18 13:10:08.293501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.727 [2024-11-18 13:10:08.293517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.727 qpair failed and we were unable to recover it. 
00:27:10.727 [2024-11-18 13:10:08.303455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.303514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.303528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.303535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.727 [2024-11-18 13:10:08.303542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.727 [2024-11-18 13:10:08.303557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.727 qpair failed and we were unable to recover it. 
00:27:10.727 [2024-11-18 13:10:08.313491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.313550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.313567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.313575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.727 [2024-11-18 13:10:08.313581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.727 [2024-11-18 13:10:08.313597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.727 qpair failed and we were unable to recover it. 
00:27:10.727 [2024-11-18 13:10:08.323480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.323539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.323554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.323561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.727 [2024-11-18 13:10:08.323568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.727 [2024-11-18 13:10:08.323584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.727 qpair failed and we were unable to recover it. 
00:27:10.727 [2024-11-18 13:10:08.333533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.333601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.333616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.333623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.727 [2024-11-18 13:10:08.333630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.727 [2024-11-18 13:10:08.333646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.727 qpair failed and we were unable to recover it. 
00:27:10.727 [2024-11-18 13:10:08.343602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.343662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.343677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.343685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.727 [2024-11-18 13:10:08.343691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.727 [2024-11-18 13:10:08.343707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.727 qpair failed and we were unable to recover it. 
00:27:10.727 [2024-11-18 13:10:08.353658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.353755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.353770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.353778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.727 [2024-11-18 13:10:08.353784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.727 [2024-11-18 13:10:08.353803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.727 qpair failed and we were unable to recover it. 
00:27:10.727 [2024-11-18 13:10:08.363657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.363708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.363722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.363729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.727 [2024-11-18 13:10:08.363736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.727 [2024-11-18 13:10:08.363750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.727 qpair failed and we were unable to recover it. 
00:27:10.727 [2024-11-18 13:10:08.373660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.373715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.373731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.373740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.727 [2024-11-18 13:10:08.373746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.727 [2024-11-18 13:10:08.373761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.727 qpair failed and we were unable to recover it. 
00:27:10.727 [2024-11-18 13:10:08.383715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.727 [2024-11-18 13:10:08.383815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.727 [2024-11-18 13:10:08.383830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.727 [2024-11-18 13:10:08.383837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.728 [2024-11-18 13:10:08.383843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.728 [2024-11-18 13:10:08.383859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.728 qpair failed and we were unable to recover it. 
00:27:10.728 [2024-11-18 13:10:08.393730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.728 [2024-11-18 13:10:08.393809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.728 [2024-11-18 13:10:08.393824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.728 [2024-11-18 13:10:08.393832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.728 [2024-11-18 13:10:08.393838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.728 [2024-11-18 13:10:08.393853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.728 qpair failed and we were unable to recover it. 
00:27:10.728 [2024-11-18 13:10:08.403733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.728 [2024-11-18 13:10:08.403784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.728 [2024-11-18 13:10:08.403799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.728 [2024-11-18 13:10:08.403807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.728 [2024-11-18 13:10:08.403814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.728 [2024-11-18 13:10:08.403830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.728 qpair failed and we were unable to recover it. 
00:27:10.728 [2024-11-18 13:10:08.413772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.728 [2024-11-18 13:10:08.413842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.728 [2024-11-18 13:10:08.413857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.728 [2024-11-18 13:10:08.413865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.728 [2024-11-18 13:10:08.413871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.728 [2024-11-18 13:10:08.413886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.728 qpair failed and we were unable to recover it. 
00:27:10.988 [2024-11-18 13:10:08.423788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.988 [2024-11-18 13:10:08.423843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.988 [2024-11-18 13:10:08.423858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.988 [2024-11-18 13:10:08.423866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.988 [2024-11-18 13:10:08.423873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.988 [2024-11-18 13:10:08.423888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.988 qpair failed and we were unable to recover it. 
00:27:10.988 [2024-11-18 13:10:08.433754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.988 [2024-11-18 13:10:08.433839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.988 [2024-11-18 13:10:08.433854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.988 [2024-11-18 13:10:08.433862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.988 [2024-11-18 13:10:08.433869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.988 [2024-11-18 13:10:08.433884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.988 qpair failed and we were unable to recover it. 
00:27:10.988 [2024-11-18 13:10:08.443839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.988 [2024-11-18 13:10:08.443891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.443909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.443917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.989 [2024-11-18 13:10:08.443924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.989 [2024-11-18 13:10:08.443940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.989 qpair failed and we were unable to recover it. 
00:27:10.989 [2024-11-18 13:10:08.453880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.989 [2024-11-18 13:10:08.453929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.453943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.453950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.989 [2024-11-18 13:10:08.453957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.989 [2024-11-18 13:10:08.453973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.989 qpair failed and we were unable to recover it. 
00:27:10.989 [2024-11-18 13:10:08.463897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.989 [2024-11-18 13:10:08.463953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.463967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.463974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.989 [2024-11-18 13:10:08.463981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.989 [2024-11-18 13:10:08.463996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.989 qpair failed and we were unable to recover it. 
00:27:10.989 [2024-11-18 13:10:08.473932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.989 [2024-11-18 13:10:08.473990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.474005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.474013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.989 [2024-11-18 13:10:08.474020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.989 [2024-11-18 13:10:08.474037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.989 qpair failed and we were unable to recover it. 
00:27:10.989 [2024-11-18 13:10:08.483955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.989 [2024-11-18 13:10:08.484009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.484024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.484031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.989 [2024-11-18 13:10:08.484041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.989 [2024-11-18 13:10:08.484057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.989 qpair failed and we were unable to recover it. 
00:27:10.989 [2024-11-18 13:10:08.493988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.989 [2024-11-18 13:10:08.494044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.494060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.494067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.989 [2024-11-18 13:10:08.494075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.989 [2024-11-18 13:10:08.494090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.989 qpair failed and we were unable to recover it. 
00:27:10.989 [2024-11-18 13:10:08.503941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.989 [2024-11-18 13:10:08.503994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.504008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.504015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.989 [2024-11-18 13:10:08.504022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.989 [2024-11-18 13:10:08.504037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.989 qpair failed and we were unable to recover it. 
00:27:10.989 [2024-11-18 13:10:08.514057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.989 [2024-11-18 13:10:08.514111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.514127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.514135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.989 [2024-11-18 13:10:08.514142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.989 [2024-11-18 13:10:08.514158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.989 qpair failed and we were unable to recover it. 
00:27:10.989 [2024-11-18 13:10:08.524077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.989 [2024-11-18 13:10:08.524131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.524145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.524152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.989 [2024-11-18 13:10:08.524159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.989 [2024-11-18 13:10:08.524175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.989 qpair failed and we were unable to recover it. 
00:27:10.989 [2024-11-18 13:10:08.534137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.989 [2024-11-18 13:10:08.534226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.989 [2024-11-18 13:10:08.534240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.989 [2024-11-18 13:10:08.534247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.534254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.534269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.544142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.990 [2024-11-18 13:10:08.544207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.990 [2024-11-18 13:10:08.544222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.990 [2024-11-18 13:10:08.544230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.544236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.544252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.554181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.990 [2024-11-18 13:10:08.554233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.990 [2024-11-18 13:10:08.554248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.990 [2024-11-18 13:10:08.554255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.554262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.554278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.564211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.990 [2024-11-18 13:10:08.564268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.990 [2024-11-18 13:10:08.564283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.990 [2024-11-18 13:10:08.564290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.564296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.564312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.574229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.990 [2024-11-18 13:10:08.574299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.990 [2024-11-18 13:10:08.574318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.990 [2024-11-18 13:10:08.574325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.574331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.574347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.584279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.990 [2024-11-18 13:10:08.584340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.990 [2024-11-18 13:10:08.584358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.990 [2024-11-18 13:10:08.584366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.584373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.584389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.594196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.990 [2024-11-18 13:10:08.594251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.990 [2024-11-18 13:10:08.594265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.990 [2024-11-18 13:10:08.594273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.594279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.594294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.604229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.990 [2024-11-18 13:10:08.604288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.990 [2024-11-18 13:10:08.604301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.990 [2024-11-18 13:10:08.604309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.604316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.604331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.614309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.990 [2024-11-18 13:10:08.614363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.990 [2024-11-18 13:10:08.614379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.990 [2024-11-18 13:10:08.614390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.614397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.614412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.624421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.990 [2024-11-18 13:10:08.624480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.990 [2024-11-18 13:10:08.624494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.990 [2024-11-18 13:10:08.624502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.990 [2024-11-18 13:10:08.624509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.990 [2024-11-18 13:10:08.624525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.990 qpair failed and we were unable to recover it. 
00:27:10.990 [2024-11-18 13:10:08.634406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.991 [2024-11-18 13:10:08.634463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.991 [2024-11-18 13:10:08.634477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.991 [2024-11-18 13:10:08.634484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.991 [2024-11-18 13:10:08.634491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.991 [2024-11-18 13:10:08.634506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.991 qpair failed and we were unable to recover it. 
00:27:10.991 [2024-11-18 13:10:08.644430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.991 [2024-11-18 13:10:08.644484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.991 [2024-11-18 13:10:08.644498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.991 [2024-11-18 13:10:08.644506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.991 [2024-11-18 13:10:08.644513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.991 [2024-11-18 13:10:08.644529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.991 qpair failed and we were unable to recover it. 
00:27:10.991 [2024-11-18 13:10:08.654384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.991 [2024-11-18 13:10:08.654446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.991 [2024-11-18 13:10:08.654460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.991 [2024-11-18 13:10:08.654469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.991 [2024-11-18 13:10:08.654475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.991 [2024-11-18 13:10:08.654491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.991 qpair failed and we were unable to recover it. 
00:27:10.991 [2024-11-18 13:10:08.664416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.991 [2024-11-18 13:10:08.664474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.991 [2024-11-18 13:10:08.664491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.991 [2024-11-18 13:10:08.664499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.991 [2024-11-18 13:10:08.664506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.991 [2024-11-18 13:10:08.664522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.991 qpair failed and we were unable to recover it. 
00:27:10.991 [2024-11-18 13:10:08.674502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.991 [2024-11-18 13:10:08.674557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.991 [2024-11-18 13:10:08.674572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.991 [2024-11-18 13:10:08.674579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.991 [2024-11-18 13:10:08.674586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.991 [2024-11-18 13:10:08.674602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.991 qpair failed and we were unable to recover it. 
00:27:10.991 [2024-11-18 13:10:08.684558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.991 [2024-11-18 13:10:08.684633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.991 [2024-11-18 13:10:08.684647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.991 [2024-11-18 13:10:08.684654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.991 [2024-11-18 13:10:08.684660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:10.991 [2024-11-18 13:10:08.684676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.991 qpair failed and we were unable to recover it. 
00:27:11.252 [2024-11-18 13:10:08.694570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.252 [2024-11-18 13:10:08.694628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.252 [2024-11-18 13:10:08.694643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.252 [2024-11-18 13:10:08.694650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.252 [2024-11-18 13:10:08.694657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.252 [2024-11-18 13:10:08.694673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.252 qpair failed and we were unable to recover it. 
00:27:11.252 [2024-11-18 13:10:08.704602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.252 [2024-11-18 13:10:08.704660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.252 [2024-11-18 13:10:08.704674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.252 [2024-11-18 13:10:08.704682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.252 [2024-11-18 13:10:08.704689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.252 [2024-11-18 13:10:08.704704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.252 qpair failed and we were unable to recover it. 
00:27:11.252 [2024-11-18 13:10:08.714665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.252 [2024-11-18 13:10:08.714727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.252 [2024-11-18 13:10:08.714741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.252 [2024-11-18 13:10:08.714748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.252 [2024-11-18 13:10:08.714755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.252 [2024-11-18 13:10:08.714769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.252 qpair failed and we were unable to recover it. 
00:27:11.252 [2024-11-18 13:10:08.724655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.252 [2024-11-18 13:10:08.724720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.724734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.724742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.724748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.724762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.734619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.734679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.734696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.734704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.734711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.734727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.744653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.744709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.744724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.744734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.744741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.744756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.754766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.754826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.754840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.754847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.754854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.754869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.764771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.764823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.764838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.764845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.764852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.764867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.774810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.774863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.774877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.774884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.774891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.774906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.784774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.784831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.784845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.784853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.784860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.784879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.794797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.794855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.794869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.794877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.794884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.794899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.804820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.804899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.804914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.804921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.804928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.804943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.814933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.814993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.815007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.815015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.815022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.815037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.824945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.825001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.825015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.253 [2024-11-18 13:10:08.825022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.253 [2024-11-18 13:10:08.825029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.253 [2024-11-18 13:10:08.825044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.253 qpair failed and we were unable to recover it. 
00:27:11.253 [2024-11-18 13:10:08.834979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.253 [2024-11-18 13:10:08.835057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.253 [2024-11-18 13:10:08.835073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.835080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.835087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.254 [2024-11-18 13:10:08.835103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.254 qpair failed and we were unable to recover it. 
00:27:11.254 [2024-11-18 13:10:08.844967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.254 [2024-11-18 13:10:08.845025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.254 [2024-11-18 13:10:08.845039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.845047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.845053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.254 [2024-11-18 13:10:08.845068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.254 qpair failed and we were unable to recover it. 
00:27:11.254 [2024-11-18 13:10:08.855023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.254 [2024-11-18 13:10:08.855071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.254 [2024-11-18 13:10:08.855085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.855093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.855100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.254 [2024-11-18 13:10:08.855115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.254 qpair failed and we were unable to recover it. 
00:27:11.254 [2024-11-18 13:10:08.865059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.254 [2024-11-18 13:10:08.865117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.254 [2024-11-18 13:10:08.865132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.865140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.865147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.254 [2024-11-18 13:10:08.865163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.254 qpair failed and we were unable to recover it. 
00:27:11.254 [2024-11-18 13:10:08.875086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.254 [2024-11-18 13:10:08.875142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.254 [2024-11-18 13:10:08.875159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.875166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.875173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.254 [2024-11-18 13:10:08.875188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.254 qpair failed and we were unable to recover it. 
00:27:11.254 [2024-11-18 13:10:08.885146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.254 [2024-11-18 13:10:08.885204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.254 [2024-11-18 13:10:08.885218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.885226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.885233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.254 [2024-11-18 13:10:08.885248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.254 qpair failed and we were unable to recover it. 
00:27:11.254 [2024-11-18 13:10:08.895062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.254 [2024-11-18 13:10:08.895111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.254 [2024-11-18 13:10:08.895125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.895132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.895139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.254 [2024-11-18 13:10:08.895155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.254 qpair failed and we were unable to recover it. 
00:27:11.254 [2024-11-18 13:10:08.905173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.254 [2024-11-18 13:10:08.905233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.254 [2024-11-18 13:10:08.905248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.905255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.905262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.254 [2024-11-18 13:10:08.905277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.254 qpair failed and we were unable to recover it. 
00:27:11.254 [2024-11-18 13:10:08.915191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.254 [2024-11-18 13:10:08.915248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.254 [2024-11-18 13:10:08.915263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.915270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.915280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.254 [2024-11-18 13:10:08.915295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.254 qpair failed and we were unable to recover it. 
00:27:11.254 [2024-11-18 13:10:08.925187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.254 [2024-11-18 13:10:08.925280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.254 [2024-11-18 13:10:08.925295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.254 [2024-11-18 13:10:08.925303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.254 [2024-11-18 13:10:08.925309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.255 [2024-11-18 13:10:08.925324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.255 qpair failed and we were unable to recover it. 
00:27:11.255 [2024-11-18 13:10:08.935262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.255 [2024-11-18 13:10:08.935318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.255 [2024-11-18 13:10:08.935332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.255 [2024-11-18 13:10:08.935339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.255 [2024-11-18 13:10:08.935346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.255 [2024-11-18 13:10:08.935365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.255 qpair failed and we were unable to recover it. 
00:27:11.255 [2024-11-18 13:10:08.945267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.255 [2024-11-18 13:10:08.945322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.255 [2024-11-18 13:10:08.945336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.255 [2024-11-18 13:10:08.945343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.255 [2024-11-18 13:10:08.945350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.255 [2024-11-18 13:10:08.945370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.255 qpair failed and we were unable to recover it. 
00:27:11.516 [2024-11-18 13:10:08.955300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.516 [2024-11-18 13:10:08.955363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.516 [2024-11-18 13:10:08.955378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.516 [2024-11-18 13:10:08.955386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.516 [2024-11-18 13:10:08.955393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.516 [2024-11-18 13:10:08.955409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.516 qpair failed and we were unable to recover it. 
00:27:11.516 [2024-11-18 13:10:08.965325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:08.965383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:08.965397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:08.965405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:08.965411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:08.965427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:08.975329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:08.975386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:08.975400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:08.975407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:08.975414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:08.975429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:08.985402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:08.985462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:08.985475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:08.985483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:08.985490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:08.985505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:08.995418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:08.995472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:08.995486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:08.995493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:08.995501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:08.995516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:09.005441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:09.005494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:09.005514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:09.005522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:09.005529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:09.005545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:09.015396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:09.015449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:09.015463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:09.015470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:09.015478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:09.015494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:09.025499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:09.025564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:09.025577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:09.025585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:09.025592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:09.025607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:09.035527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:09.035582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:09.035596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:09.035604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:09.035611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:09.035627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:09.045562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:09.045614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:09.045628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:09.045635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:09.045646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:09.045661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:09.055613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:09.055670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:09.055684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:09.055692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.516 [2024-11-18 13:10:09.055698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.516 [2024-11-18 13:10:09.055714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.516 qpair failed and we were unable to recover it.
00:27:11.516 [2024-11-18 13:10:09.065609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.516 [2024-11-18 13:10:09.065664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.516 [2024-11-18 13:10:09.065677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.516 [2024-11-18 13:10:09.065685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.065692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.065707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.075612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.075669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.075683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.075690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.075697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.075712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.085611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.085673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.085686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.085693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.085700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.085715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.095679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.095735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.095750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.095758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.095764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.095779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.105760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.105862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.105878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.105885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.105892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.105908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.115737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.115795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.115809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.115816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.115823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.115838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.125764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.125819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.125832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.125839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.125846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.125862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.135806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.135870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.135888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.135895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.135902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.135917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.145873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.145933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.145947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.145954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.145961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.145976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.155894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.155952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.155966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.155974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.155980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.155996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.165885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.165938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.165952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.165959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.165966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.165981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.175966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.176022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.176035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.176045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.176052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.176068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.185885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.185950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.185964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.185973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.185979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.517 [2024-11-18 13:10:09.185994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.517 qpair failed and we were unable to recover it.
00:27:11.517 [2024-11-18 13:10:09.195971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.517 [2024-11-18 13:10:09.196028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.517 [2024-11-18 13:10:09.196042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.517 [2024-11-18 13:10:09.196049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.517 [2024-11-18 13:10:09.196056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.518 [2024-11-18 13:10:09.196072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.518 qpair failed and we were unable to recover it.
00:27:11.518 [2024-11-18 13:10:09.206002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.518 [2024-11-18 13:10:09.206067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.518 [2024-11-18 13:10:09.206081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.518 [2024-11-18 13:10:09.206089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.518 [2024-11-18 13:10:09.206095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.518 [2024-11-18 13:10:09.206110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.518 qpair failed and we were unable to recover it.
00:27:11.778 [2024-11-18 13:10:09.216014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.778 [2024-11-18 13:10:09.216068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.778 [2024-11-18 13:10:09.216083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.778 [2024-11-18 13:10:09.216090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.778 [2024-11-18 13:10:09.216097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.778 [2024-11-18 13:10:09.216113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.778 qpair failed and we were unable to recover it.
00:27:11.778 [2024-11-18 13:10:09.226073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.778 [2024-11-18 13:10:09.226129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.778 [2024-11-18 13:10:09.226143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.778 [2024-11-18 13:10:09.226150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.778 [2024-11-18 13:10:09.226157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.778 [2024-11-18 13:10:09.226173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.778 qpair failed and we were unable to recover it.
00:27:11.778 [2024-11-18 13:10:09.236107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.778 [2024-11-18 13:10:09.236162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.778 [2024-11-18 13:10:09.236176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.778 [2024-11-18 13:10:09.236183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.778 [2024-11-18 13:10:09.236190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.778 [2024-11-18 13:10:09.236206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.778 qpair failed and we were unable to recover it.
00:27:11.778 [2024-11-18 13:10:09.246129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.778 [2024-11-18 13:10:09.246186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.778 [2024-11-18 13:10:09.246200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.778 [2024-11-18 13:10:09.246208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.778 [2024-11-18 13:10:09.246214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.778 [2024-11-18 13:10:09.246229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.778 qpair failed and we were unable to recover it.
00:27:11.778 [2024-11-18 13:10:09.256160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.778 [2024-11-18 13:10:09.256211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.778 [2024-11-18 13:10:09.256227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.778 [2024-11-18 13:10:09.256234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.778 [2024-11-18 13:10:09.256240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.778 [2024-11-18 13:10:09.256256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.778 qpair failed and we were unable to recover it.
00:27:11.778 [2024-11-18 13:10:09.266226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.778 [2024-11-18 13:10:09.266296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.778 [2024-11-18 13:10:09.266313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.778 [2024-11-18 13:10:09.266320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.778 [2024-11-18 13:10:09.266328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.778 [2024-11-18 13:10:09.266343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.778 qpair failed and we were unable to recover it.
00:27:11.778 [2024-11-18 13:10:09.276136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.778 [2024-11-18 13:10:09.276200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.778 [2024-11-18 13:10:09.276214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.779 [2024-11-18 13:10:09.276222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.779 [2024-11-18 13:10:09.276229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.779 [2024-11-18 13:10:09.276243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.779 qpair failed and we were unable to recover it.
00:27:11.779 [2024-11-18 13:10:09.286239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.779 [2024-11-18 13:10:09.286293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.779 [2024-11-18 13:10:09.286307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.779 [2024-11-18 13:10:09.286314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.779 [2024-11-18 13:10:09.286321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.779 [2024-11-18 13:10:09.286336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.779 qpair failed and we were unable to recover it.
00:27:11.779 [2024-11-18 13:10:09.296272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.779 [2024-11-18 13:10:09.296325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.779 [2024-11-18 13:10:09.296341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.779 [2024-11-18 13:10:09.296349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.779 [2024-11-18 13:10:09.296359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.779 [2024-11-18 13:10:09.296376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.779 qpair failed and we were unable to recover it.
00:27:11.779 [2024-11-18 13:10:09.306303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.779 [2024-11-18 13:10:09.306359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.779 [2024-11-18 13:10:09.306374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.779 [2024-11-18 13:10:09.306384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.779 [2024-11-18 13:10:09.306392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90
00:27:11.779 [2024-11-18 13:10:09.306407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.779 qpair failed and we were unable to recover it.
00:27:11.779 [2024-11-18 13:10:09.316332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.316391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.316406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.316413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.779 [2024-11-18 13:10:09.316420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.779 [2024-11-18 13:10:09.316436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.779 qpair failed and we were unable to recover it. 
00:27:11.779 [2024-11-18 13:10:09.326349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.326408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.326422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.326429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.779 [2024-11-18 13:10:09.326436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.779 [2024-11-18 13:10:09.326452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.779 qpair failed and we were unable to recover it. 
00:27:11.779 [2024-11-18 13:10:09.336417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.336478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.336492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.336499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.779 [2024-11-18 13:10:09.336507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.779 [2024-11-18 13:10:09.336522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.779 qpair failed and we were unable to recover it. 
00:27:11.779 [2024-11-18 13:10:09.346401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.346479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.346494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.346501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.779 [2024-11-18 13:10:09.346507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.779 [2024-11-18 13:10:09.346526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.779 qpair failed and we were unable to recover it. 
00:27:11.779 [2024-11-18 13:10:09.356460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.356522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.356537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.356544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.779 [2024-11-18 13:10:09.356551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.779 [2024-11-18 13:10:09.356566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.779 qpair failed and we were unable to recover it. 
00:27:11.779 [2024-11-18 13:10:09.366398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.366454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.366468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.366475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.779 [2024-11-18 13:10:09.366482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.779 [2024-11-18 13:10:09.366497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.779 qpair failed and we were unable to recover it. 
00:27:11.779 [2024-11-18 13:10:09.376521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.376579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.376593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.376601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.779 [2024-11-18 13:10:09.376607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.779 [2024-11-18 13:10:09.376622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.779 qpair failed and we were unable to recover it. 
00:27:11.779 [2024-11-18 13:10:09.386523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.386580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.386594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.386601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.779 [2024-11-18 13:10:09.386608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.779 [2024-11-18 13:10:09.386623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.779 qpair failed and we were unable to recover it. 
00:27:11.779 [2024-11-18 13:10:09.396581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.396660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.396675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.396683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.779 [2024-11-18 13:10:09.396689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.779 [2024-11-18 13:10:09.396705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.779 qpair failed and we were unable to recover it. 
00:27:11.779 [2024-11-18 13:10:09.406600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.779 [2024-11-18 13:10:09.406654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.779 [2024-11-18 13:10:09.406669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.779 [2024-11-18 13:10:09.406676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.780 [2024-11-18 13:10:09.406683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.780 [2024-11-18 13:10:09.406699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.780 qpair failed and we were unable to recover it. 
00:27:11.780 [2024-11-18 13:10:09.416628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.780 [2024-11-18 13:10:09.416681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.780 [2024-11-18 13:10:09.416695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.780 [2024-11-18 13:10:09.416702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.780 [2024-11-18 13:10:09.416709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.780 [2024-11-18 13:10:09.416725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.780 qpair failed and we were unable to recover it. 
00:27:11.780 [2024-11-18 13:10:09.426663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.780 [2024-11-18 13:10:09.426722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.780 [2024-11-18 13:10:09.426736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.780 [2024-11-18 13:10:09.426743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.780 [2024-11-18 13:10:09.426749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.780 [2024-11-18 13:10:09.426764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.780 qpair failed and we were unable to recover it. 
00:27:11.780 [2024-11-18 13:10:09.436688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.780 [2024-11-18 13:10:09.436742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.780 [2024-11-18 13:10:09.436758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.780 [2024-11-18 13:10:09.436766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.780 [2024-11-18 13:10:09.436773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.780 [2024-11-18 13:10:09.436788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.780 qpair failed and we were unable to recover it. 
00:27:11.780 [2024-11-18 13:10:09.446701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.780 [2024-11-18 13:10:09.446752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.780 [2024-11-18 13:10:09.446765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.780 [2024-11-18 13:10:09.446772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.780 [2024-11-18 13:10:09.446779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.780 [2024-11-18 13:10:09.446794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.780 qpair failed and we were unable to recover it. 
00:27:11.780 [2024-11-18 13:10:09.456755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.780 [2024-11-18 13:10:09.456814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.780 [2024-11-18 13:10:09.456828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.780 [2024-11-18 13:10:09.456835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.780 [2024-11-18 13:10:09.456842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.780 [2024-11-18 13:10:09.456857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.780 qpair failed and we were unable to recover it. 
00:27:11.780 [2024-11-18 13:10:09.466749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.780 [2024-11-18 13:10:09.466808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.780 [2024-11-18 13:10:09.466821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.780 [2024-11-18 13:10:09.466829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.780 [2024-11-18 13:10:09.466835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:11.780 [2024-11-18 13:10:09.466851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.780 qpair failed and we were unable to recover it. 
00:27:12.041 [2024-11-18 13:10:09.476804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.476907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.476921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.476929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.476940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.476954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.486798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.486859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.486873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.486881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.486888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.486903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.496850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.496902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.496916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.496923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.496930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.496946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.506885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.506939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.506953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.506960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.506968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.506983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.516903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.516957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.516972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.516980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.516987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.517002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.526938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.526992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.527006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.527013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.527019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.527035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.536994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.537044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.537058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.537065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.537072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.537087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.547002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.547057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.547071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.547078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.547085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.547100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.557029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.557084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.557098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.557105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.557112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.557128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.567052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.567107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.567124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.567131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.567138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.567153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.577084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.577133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.577147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.577154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.577161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.577177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.587112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.587175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.587189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.587197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.587203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.587219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.042 qpair failed and we were unable to recover it. 
00:27:12.042 [2024-11-18 13:10:09.597127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.042 [2024-11-18 13:10:09.597186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.042 [2024-11-18 13:10:09.597201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.042 [2024-11-18 13:10:09.597210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.042 [2024-11-18 13:10:09.597217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.042 [2024-11-18 13:10:09.597233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.043 qpair failed and we were unable to recover it. 
00:27:12.043 [2024-11-18 13:10:09.607086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.043 [2024-11-18 13:10:09.607141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.043 [2024-11-18 13:10:09.607155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.043 [2024-11-18 13:10:09.607163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.043 [2024-11-18 13:10:09.607174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.043 [2024-11-18 13:10:09.607190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.043 qpair failed and we were unable to recover it. 
00:27:12.043 [2024-11-18 13:10:09.617190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.043 [2024-11-18 13:10:09.617244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.043 [2024-11-18 13:10:09.617258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.043 [2024-11-18 13:10:09.617265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.043 [2024-11-18 13:10:09.617272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.043 [2024-11-18 13:10:09.617288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.043 qpair failed and we were unable to recover it. 
00:27:12.043 [2024-11-18 13:10:09.627257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.043 [2024-11-18 13:10:09.627316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.043 [2024-11-18 13:10:09.627330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.043 [2024-11-18 13:10:09.627338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.043 [2024-11-18 13:10:09.627345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.043 [2024-11-18 13:10:09.627365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.043 qpair failed and we were unable to recover it. 
00:27:12.043 [2024-11-18 13:10:09.637256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.043 [2024-11-18 13:10:09.637311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.043 [2024-11-18 13:10:09.637325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.043 [2024-11-18 13:10:09.637333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.043 [2024-11-18 13:10:09.637340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.043 [2024-11-18 13:10:09.637360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.043 qpair failed and we were unable to recover it. 
00:27:12.043 [2024-11-18 13:10:09.647328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.043 [2024-11-18 13:10:09.647387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.043 [2024-11-18 13:10:09.647401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.043 [2024-11-18 13:10:09.647410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.043 [2024-11-18 13:10:09.647416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.043 [2024-11-18 13:10:09.647431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.043 qpair failed and we were unable to recover it. 
00:27:12.043 [2024-11-18 13:10:09.657310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.043 [2024-11-18 13:10:09.657400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.043 [2024-11-18 13:10:09.657414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.043 [2024-11-18 13:10:09.657421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.043 [2024-11-18 13:10:09.657427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.043 [2024-11-18 13:10:09.657442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.043 qpair failed and we were unable to recover it. 
00:27:12.043 [2024-11-18 13:10:09.667360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.043 [2024-11-18 13:10:09.667428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.043 [2024-11-18 13:10:09.667442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.043 [2024-11-18 13:10:09.667450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.043 [2024-11-18 13:10:09.667457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad24000b90 00:27:12.043 [2024-11-18 13:10:09.667473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.043 qpair failed and we were unable to recover it. 
00:27:12.043 [2024-11-18 13:10:09.677390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.043 [2024-11-18 13:10:09.677482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.043 [2024-11-18 13:10:09.677534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.043 [2024-11-18 13:10:09.677557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.043 [2024-11-18 13:10:09.677579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad1c000b90 00:27:12.043 [2024-11-18 13:10:09.677628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.043 qpair failed and we were unable to recover it. 
00:27:12.043 [2024-11-18 13:10:09.687429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.043 [2024-11-18 13:10:09.687561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.043 [2024-11-18 13:10:09.687599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.043 [2024-11-18 13:10:09.687619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.043 [2024-11-18 13:10:09.687639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fad1c000b90 00:27:12.043 [2024-11-18 13:10:09.687682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.043 qpair failed and we were unable to recover it. 00:27:12.043 [2024-11-18 13:10:09.687882] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:12.043 A controller has encountered a failure and is being reset. 00:27:12.304 Controller properly reset. 00:27:12.304 Initializing NVMe Controllers 00:27:12.304 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:12.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:12.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:12.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:12.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:12.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:12.304 Initialization complete. Launching workers. 
00:27:12.304 Starting thread on core 1 00:27:12.304 Starting thread on core 2 00:27:12.304 Starting thread on core 3 00:27:12.304 Starting thread on core 0 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:12.304 00:27:12.304 real 0m10.904s 00:27:12.304 user 0m18.995s 00:27:12.304 sys 0m4.749s 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.304 ************************************ 00:27:12.304 END TEST nvmf_target_disconnect_tc2 00:27:12.304 ************************************ 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:12.304 rmmod nvme_tcp 00:27:12.304 rmmod nvme_fabrics 00:27:12.304 rmmod nvme_keyring 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2483894 ']' 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2483894 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2483894 ']' 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 2483894 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2483894 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2483894' 00:27:12.304 killing process with pid 2483894 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 2483894 00:27:12.304 13:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 2483894 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.564 13:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.104 13:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:15.104 00:27:15.104 real 0m19.667s 00:27:15.104 user 0m47.135s 00:27:15.104 sys 0m9.693s 00:27:15.104 13:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:15.104 13:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:15.104 ************************************ 00:27:15.104 END TEST nvmf_target_disconnect 00:27:15.104 ************************************ 00:27:15.104 13:10:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:15.104 00:27:15.104 real 5m53.131s 00:27:15.104 user 10m37.322s 00:27:15.104 sys 1m58.718s 00:27:15.104 13:10:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:15.104 13:10:12 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.104 ************************************ 00:27:15.104 END TEST nvmf_host 00:27:15.104 ************************************ 00:27:15.104 13:10:12 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:15.104 13:10:12 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:15.104 13:10:12 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:15.104 13:10:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:15.104 13:10:12 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:15.104 13:10:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:15.104 ************************************ 00:27:15.104 START TEST nvmf_target_core_interrupt_mode 00:27:15.104 ************************************ 00:27:15.104 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:15.104 * Looking for test storage... 
00:27:15.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:15.104 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:15.104 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:27:15.104 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:15.104 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:15.105 13:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:15.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.105 --rc 
genhtml_branch_coverage=1 00:27:15.105 --rc genhtml_function_coverage=1 00:27:15.105 --rc genhtml_legend=1 00:27:15.105 --rc geninfo_all_blocks=1 00:27:15.105 --rc geninfo_unexecuted_blocks=1 00:27:15.105 00:27:15.105 ' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:15.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.105 --rc genhtml_branch_coverage=1 00:27:15.105 --rc genhtml_function_coverage=1 00:27:15.105 --rc genhtml_legend=1 00:27:15.105 --rc geninfo_all_blocks=1 00:27:15.105 --rc geninfo_unexecuted_blocks=1 00:27:15.105 00:27:15.105 ' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:15.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.105 --rc genhtml_branch_coverage=1 00:27:15.105 --rc genhtml_function_coverage=1 00:27:15.105 --rc genhtml_legend=1 00:27:15.105 --rc geninfo_all_blocks=1 00:27:15.105 --rc geninfo_unexecuted_blocks=1 00:27:15.105 00:27:15.105 ' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:15.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.105 --rc genhtml_branch_coverage=1 00:27:15.105 --rc genhtml_function_coverage=1 00:27:15.105 --rc genhtml_legend=1 00:27:15.105 --rc geninfo_all_blocks=1 00:27:15.105 --rc geninfo_unexecuted_blocks=1 00:27:15.105 00:27:15.105 ' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.105 
13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.105 13:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:15.105 
13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:15.105 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:15.105 ************************************ 00:27:15.105 START TEST nvmf_abort 00:27:15.106 ************************************ 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:15.106 * Looking for test storage... 
00:27:15.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:15.106 13:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:15.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.106 --rc genhtml_branch_coverage=1 00:27:15.106 --rc genhtml_function_coverage=1 00:27:15.106 --rc genhtml_legend=1 00:27:15.106 --rc geninfo_all_blocks=1 00:27:15.106 --rc geninfo_unexecuted_blocks=1 00:27:15.106 00:27:15.106 ' 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:15.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.106 --rc genhtml_branch_coverage=1 00:27:15.106 --rc genhtml_function_coverage=1 00:27:15.106 --rc genhtml_legend=1 00:27:15.106 --rc geninfo_all_blocks=1 00:27:15.106 --rc geninfo_unexecuted_blocks=1 00:27:15.106 00:27:15.106 ' 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:15.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.106 --rc genhtml_branch_coverage=1 00:27:15.106 --rc genhtml_function_coverage=1 00:27:15.106 --rc genhtml_legend=1 00:27:15.106 --rc geninfo_all_blocks=1 00:27:15.106 --rc geninfo_unexecuted_blocks=1 00:27:15.106 00:27:15.106 ' 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:15.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.106 --rc genhtml_branch_coverage=1 00:27:15.106 --rc genhtml_function_coverage=1 00:27:15.106 --rc genhtml_legend=1 00:27:15.106 --rc geninfo_all_blocks=1 00:27:15.106 --rc geninfo_unexecuted_blocks=1 00:27:15.106 00:27:15.106 ' 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.106 13:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.106 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.107 13:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.107 13:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:21.690 13:10:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:21.690 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:21.690 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.690 
13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:21.690 Found net devices under 0000:86:00.0: cvl_0_0 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:21.690 Found net devices under 0000:86:00.1: cvl_0_1 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.690 13:10:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:21.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:27:21.690 00:27:21.690 --- 10.0.0.2 ping statistics --- 00:27:21.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.690 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:27:21.690 00:27:21.690 --- 10.0.0.1 ping statistics --- 00:27:21.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.690 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2488472 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2488472 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2488472 ']' 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.690 [2024-11-18 13:10:18.704439] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:21.690 [2024-11-18 13:10:18.705474] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:27:21.690 [2024-11-18 13:10:18.705516] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.690 [2024-11-18 13:10:18.786466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:21.690 [2024-11-18 13:10:18.828528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.690 [2024-11-18 13:10:18.828566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.690 [2024-11-18 13:10:18.828573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.690 [2024-11-18 13:10:18.828579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.690 [2024-11-18 13:10:18.828584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.690 [2024-11-18 13:10:18.830041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.690 [2024-11-18 13:10:18.830148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.690 [2024-11-18 13:10:18.830149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.690 [2024-11-18 13:10:18.898317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:21.690 [2024-11-18 13:10:18.899260] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:21.690 [2024-11-18 13:10:18.899425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:21.690 [2024-11-18 13:10:18.899574] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:21.690 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.691 [2024-11-18 13:10:18.970855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.691 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:21.691 Malloc0 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.691 Delay0 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.691 [2024-11-18 13:10:19.074859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.691 13:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:21.691 [2024-11-18 13:10:19.199190] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:23.597 Initializing NVMe Controllers 00:27:23.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:23.597 controller IO queue size 128 less than required 00:27:23.597 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:23.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:23.597 Initialization complete. Launching workers. 
00:27:23.597 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37075 00:27:23.597 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37136, failed to submit 66 00:27:23.597 success 37075, unsuccessful 61, failed 0 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:23.597 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:23.597 rmmod nvme_tcp 00:27:23.597 rmmod nvme_fabrics 00:27:23.597 rmmod nvme_keyring 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.857 13:10:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2488472 ']' 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2488472 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2488472 ']' 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2488472 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2488472 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2488472' 00:27:23.857 killing process with pid 2488472 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2488472 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2488472 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.857 13:10:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.857 13:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:26.394 00:27:26.394 real 0m11.073s 00:27:26.394 user 0m10.199s 00:27:26.394 sys 0m5.643s 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:26.394 ************************************ 00:27:26.394 END TEST nvmf_abort 00:27:26.394 ************************************ 00:27:26.394 13:10:23 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:26.394 ************************************ 00:27:26.394 START TEST nvmf_ns_hotplug_stress 00:27:26.394 ************************************ 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:26.394 * Looking for test storage... 
00:27:26.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.394 13:10:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.394 13:10:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.394 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:26.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.394 --rc genhtml_branch_coverage=1 00:27:26.394 --rc genhtml_function_coverage=1 00:27:26.394 --rc genhtml_legend=1 00:27:26.394 --rc geninfo_all_blocks=1 00:27:26.395 --rc geninfo_unexecuted_blocks=1 00:27:26.395 00:27:26.395 ' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:26.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.395 --rc genhtml_branch_coverage=1 00:27:26.395 --rc genhtml_function_coverage=1 00:27:26.395 --rc genhtml_legend=1 00:27:26.395 --rc geninfo_all_blocks=1 00:27:26.395 --rc geninfo_unexecuted_blocks=1 00:27:26.395 00:27:26.395 ' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:26.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.395 --rc genhtml_branch_coverage=1 00:27:26.395 --rc genhtml_function_coverage=1 00:27:26.395 --rc genhtml_legend=1 00:27:26.395 --rc geninfo_all_blocks=1 00:27:26.395 --rc geninfo_unexecuted_blocks=1 00:27:26.395 00:27:26.395 ' 00:27:26.395 13:10:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:26.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.395 --rc genhtml_branch_coverage=1 00:27:26.395 --rc genhtml_function_coverage=1 00:27:26.395 --rc genhtml_legend=1 00:27:26.395 --rc geninfo_all_blocks=1 00:27:26.395 --rc geninfo_unexecuted_blocks=1 00:27:26.395 00:27:26.395 ' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.395 13:10:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.395 
13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:26.395 13:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:32.969 
13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.969 13:10:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:32.969 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.969 13:10:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:32.969 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.969 
13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.969 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:32.969 Found net devices under 0000:86:00.0: cvl_0_0 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:32.970 Found net devices under 0000:86:00.1: cvl_0_1 00:27:32.970 
13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:32.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:27:32.970 00:27:32.970 --- 10.0.0.2 ping statistics --- 00:27:32.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.970 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:27:32.970 00:27:32.970 --- 10.0.0.1 ping statistics --- 00:27:32.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.970 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:32.970 13:10:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2492432 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2492432 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2492432 ']' 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:32.970 13:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:32.970 [2024-11-18 13:10:29.877493] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:32.970 [2024-11-18 13:10:29.878522] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:27:32.970 [2024-11-18 13:10:29.878560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.970 [2024-11-18 13:10:29.963808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:32.970 [2024-11-18 13:10:30.004141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.970 [2024-11-18 13:10:30.004178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.970 [2024-11-18 13:10:30.004186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.970 [2024-11-18 13:10:30.004192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.970 [2024-11-18 13:10:30.004198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:32.970 [2024-11-18 13:10:30.005478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.970 [2024-11-18 13:10:30.005565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.970 [2024-11-18 13:10:30.005565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.970 [2024-11-18 13:10:30.075721] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:32.970 [2024-11-18 13:10:30.076557] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:32.970 [2024-11-18 13:10:30.076885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:32.970 [2024-11-18 13:10:30.076969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:32.970 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:32.970 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:27:32.971 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:32.971 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:32.971 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:32.971 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.971 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:32.971 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:32.971 [2024-11-18 13:10:30.314379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.971 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:32.971 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.230 [2024-11-18 13:10:30.742711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.230 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:33.490 13:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:33.490 Malloc0 00:27:33.490 13:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:33.749 Delay0 00:27:33.749 13:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.007 13:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:34.267 NULL1 00:27:34.267 13:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:34.526 13:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2492839 00:27:34.526 13:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:34.526 13:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:34.526 13:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.526 13:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.785 13:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:34.785 13:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:35.044 true 00:27:35.044 13:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:35.044 13:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.303 13:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.562 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:35.562 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:35.821 true 00:27:35.821 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:35.821 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.821 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:36.080 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:36.080 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:36.339 true 00:27:36.339 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:36.339 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:36.598 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:36.857 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:36.857 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:37.116 true 00:27:37.116 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:37.116 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.116 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.376 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:37.376 13:10:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:37.634 true 00:27:37.634 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:37.634 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.892 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.151 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:38.151 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:38.151 true 00:27:38.411 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:38.411 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.411 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.670 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:27:38.670 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:38.929 true 00:27:38.929 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:38.929 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.188 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.447 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:39.447 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:39.447 true 00:27:39.447 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:39.447 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.705 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.967 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:27:39.967 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:40.225 true 00:27:40.225 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:40.225 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.484 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.744 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:40.744 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:40.744 true 00:27:40.744 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:40.744 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.004 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.262 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:41.262 13:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:41.521 true 00:27:41.522 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:41.522 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.780 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.040 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:42.040 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:42.040 true 00:27:42.040 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:42.040 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.299 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.560 13:10:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:42.560 13:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:42.821 true 00:27:42.821 13:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:42.821 13:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.080 13:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.080 13:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:43.080 13:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:43.340 true 00:27:43.340 13:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:43.340 13:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.599 13:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:27:43.858 13:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:43.858 13:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:44.117 true 00:27:44.117 13:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:44.117 13:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.376 13:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.376 13:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:44.376 13:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:44.635 true 00:27:44.636 13:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:44.636 13:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.895 13:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.154 13:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:45.154 13:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:45.413 true 00:27:45.413 13:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:45.413 13:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.673 13:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.673 13:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:45.673 13:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:45.931 true 00:27:45.931 13:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:45.931 13:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.189 13:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.448 13:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:46.448 13:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:46.708 true 00:27:46.708 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:46.708 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.708 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.967 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:46.967 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:47.226 true 00:27:47.226 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:47.226 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.485 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.745 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:47.745 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:47.745 true 00:27:48.004 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:48.004 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.004 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.263 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:48.263 13:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:48.523 true 00:27:48.523 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:48.523 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.782 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.041 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:49.041 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:49.041 true 00:27:49.041 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:49.041 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.301 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.560 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:49.560 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:49.819 true 00:27:49.819 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:49.819 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.078 13:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.338 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:50.338 13:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:50.338 true 00:27:50.338 13:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:50.338 13:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.597 13:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.857 13:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:50.857 13:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:51.116 true 00:27:51.116 13:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:51.116 13:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:27:51.375 13:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.634 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:51.634 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:51.634 true 00:27:51.634 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:51.634 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.921 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.222 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:52.222 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:52.499 true 00:27:52.500 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:52.500 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:52.500 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.759 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:52.759 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:53.017 true 00:27:53.017 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:53.017 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.276 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.536 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:27:53.536 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:27:53.536 true 00:27:53.795 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:53.795 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.795 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.054 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:27:54.054 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:27:54.313 true 00:27:54.313 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:54.313 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.572 13:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.831 13:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:27:54.831 13:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:27:54.831 true 00:27:54.831 13:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:54.831 13:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.092 13:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.351 13:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:27:55.351 13:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:27:55.610 true 00:27:55.610 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:55.610 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.868 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.127 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:27:56.127 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:27:56.127 true 00:27:56.127 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:56.127 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.387 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.646 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:27:56.646 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:27:56.905 true 00:27:56.905 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:56.905 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.165 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.425 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:27:57.425 13:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:27:57.425 true 00:27:57.425 13:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:57.425 13:10:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.684 13:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.944 13:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:27:57.944 13:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:27:58.203 true 00:27:58.203 13:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:58.203 13:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.463 13:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.722 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:27:58.722 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:27:58.722 true 00:27:58.722 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 
00:27:58.722 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.982 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.241 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:27:59.241 13:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:27:59.500 true 00:27:59.500 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:27:59.500 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.759 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.018 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:28:00.019 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:28:00.019 true 00:28:00.019 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2492839 00:28:00.019 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.278 13:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.538 13:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:28:00.538 13:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:28:00.796 true 00:28:00.796 13:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:28:00.796 13:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.055 13:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.055 13:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:28:01.055 13:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:28:01.315 true 00:28:01.315 13:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:28:01.315 13:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.573 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.833 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:28:01.833 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:28:02.092 true 00:28:02.092 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:28:02.092 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.352 13:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.352 13:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:28:02.352 13:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:28:02.611 true 00:28:02.611 13:11:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:28:02.611 13:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.869 13:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.129 13:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:28:03.129 13:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:28:03.388 true 00:28:03.388 13:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839 00:28:03.388 13:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.647 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.647 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:28:03.647 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:28:03.906 true 
00:28:03.906 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839
00:28:03.906 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:04.165 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:04.423 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:28:04.423 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:28:04.682 true
00:28:04.682 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839
00:28:04.682 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:04.682 Initializing NVMe Controllers
00:28:04.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:04.683 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:28:04.683 Controller IO queue size 128, less than required.
00:28:04.683 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:04.683 WARNING: Some requested NVMe devices were skipped
00:28:04.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:04.683 Initialization complete. Launching workers.
00:28:04.683 ========================================================
00:28:04.683 Latency(us)
00:28:04.683 Device Information : IOPS MiB/s Average min max
00:28:04.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27103.68 13.23 4722.68 1342.39 44153.33
00:28:04.683 ========================================================
00:28:04.683 Total : 27103.68 13.23 4722.68 1342.39 44153.33
00:28:04.942 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:04.942 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:28:04.942 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:28:05.201 true
00:28:05.201 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2492839
00:28:05.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2492839) - No such process
00:28:05.201 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2492839
00:28:05.201 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:05.460 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:05.718 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:05.718 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:05.718 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:05.718 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:05.718 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:05.718 null0
00:28:05.718 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:05.718 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:05.718 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:28:05.977 null1
00:28:05.977 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:05.977 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:05.977 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:06.236 null2 00:28:06.236 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:06.236 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:06.236 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:06.495 null3 00:28:06.495 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:06.495 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:06.495 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:06.495 null4 00:28:06.495 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:06.495 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:06.495 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:06.754 null5 00:28:06.754 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:06.754 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:06.754 13:11:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:07.014 null6 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:07.014 null7 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
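The interleaved `@14`-`@18` entries in this log are several copies of the script's `add_remove` helper running concurrently. Reconstructed from the trace, each worker performs ten add/remove cycles for one (nsid, bdev) pair. A runnable sketch, with `rpc` echoing instead of invoking `scripts/rpc.py` so it works without an SPDK target:

```shell
# add_remove as reconstructed from the @14-@18 trace lines: ten cycles of
# nvmf_subsystem_add_ns followed by nvmf_subsystem_remove_ns. "rpc" is a
# stub for scripts/rpc.py so the sketch runs standalone.
rpc() { echo "rpc.py $*"; }

add_remove() {
    local nsid=$1 bdev=$2                                                    # @14
    for ((i = 0; i < 10; i++)); do                                           # @16
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
}

add_remove 1 null0 | wc -l    # 10 cycles x 2 RPCs = 20 lines
```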
00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.014 13:11:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.014 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
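The `@58`-`@66` entries show the driver for this phase: eight null bdevs are created, one `add_remove` worker is launched per bdev in the background, the pids are collected, and the script waits on all of them (the `wait 2498099 ...` entry). A sketch under the same stubbing assumption (`rpc` and a trivial `add_remove` replace the real RPC calls):

```shell
# Driver reconstructed from the @58-@66 trace: create nthreads null bdevs,
# fork one add_remove worker per bdev, then wait for all of them.
# "rpc" and add_remove are stubs so this runs without an SPDK target.
rpc() { :; }
add_remove() { :; }

nthreads=8                                    # @58
pids=()                                       # @58
for ((i = 0; i < nthreads; i++)); do          # @59
    rpc bdev_null_create "null$i" 100 4096    # @60: 100 MB, 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do          # @62
    add_remove $((i + 1)) "null$i" &          # @63: nsid i+1 backed by null$i
    pids+=($!)                                # @64
done
wait "${pids[@]}"                             # @66
echo "workers=${#pids[@]}"
```

The backgrounded workers are what produce the heavily interleaved `@16`/`@17`/`@18` trace lines in the surrounding log.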
00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2498099 2498101 2498102 2498104 2498106 2498108 2498110 2498112 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.015 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:07.274 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.274 13:11:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:07.274 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:07.274 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:07.274 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:07.274 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:07.274 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:07.274 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.534 13:11:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.534 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:07.793 13:11:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.794 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:07.794 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:07.794 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:07.794 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:07.794 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:07.794 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:07.794 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.053 13:11:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.053 13:11:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.053 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.313 13:11:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.313 13:11:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.313 13:11:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.313 13:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.572 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.572 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.572 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:08.572 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:08.572 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.572 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:08.572 13:11:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.572 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.831 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.832 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.832 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.832 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.832 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.832 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.091 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.091 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.091 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.091 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.091 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.091 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.091 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.091 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.350 13:11:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.350 13:11:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.350 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.350 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.350 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.350 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.350 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.350 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:09.350 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.350 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.609 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.609 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.609 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.609 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.609 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.609 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.610 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.869 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.869 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.869 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.869 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:28:09.869 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:09.869 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.869 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.869 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.129 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.389 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:10.389 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.389 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.389 13:11:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.389 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.389 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.389 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.389 13:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.389 13:11:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.389 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.648 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:10.648 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:10.649 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:10.649 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:10.649 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:10.649 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:10.649 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:10.649 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.908 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.167 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.167 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.167 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.167 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.167 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.167 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.168 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.168 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:11.427 13:11:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:11.427 rmmod nvme_tcp 00:28:11.427 rmmod nvme_fabrics 00:28:11.427 rmmod nvme_keyring 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2492432 ']' 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2492432 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2492432 ']' 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2492432 00:28:11.427 13:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@957 -- # uname 00:28:11.427 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:11.427 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2492432 00:28:11.427 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:11.427 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:11.427 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2492432' 00:28:11.427 killing process with pid 2492432 00:28:11.427 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2492432 00:28:11.427 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2492432 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 
-- # iptables-restore 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.685 13:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:14.223 00:28:14.223 real 0m47.622s 00:28:14.223 user 3m4.069s 00:28:14.223 sys 0m21.454s 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:14.223 ************************************ 00:28:14.223 END TEST nvmf_ns_hotplug_stress 00:28:14.223 ************************************ 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 
-- # set +x 00:28:14.223 ************************************ 00:28:14.223 START TEST nvmf_delete_subsystem 00:28:14.223 ************************************ 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:14.223 * Looking for test storage... 00:28:14.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # 
read -ra ver2 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:14.223 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:14.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.224 --rc genhtml_branch_coverage=1 00:28:14.224 --rc genhtml_function_coverage=1 00:28:14.224 --rc genhtml_legend=1 00:28:14.224 --rc geninfo_all_blocks=1 00:28:14.224 --rc geninfo_unexecuted_blocks=1 00:28:14.224 00:28:14.224 ' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:14.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.224 --rc genhtml_branch_coverage=1 00:28:14.224 --rc genhtml_function_coverage=1 00:28:14.224 --rc genhtml_legend=1 00:28:14.224 --rc geninfo_all_blocks=1 00:28:14.224 --rc geninfo_unexecuted_blocks=1 00:28:14.224 00:28:14.224 ' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:14.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.224 --rc genhtml_branch_coverage=1 00:28:14.224 --rc genhtml_function_coverage=1 00:28:14.224 --rc genhtml_legend=1 00:28:14.224 --rc geninfo_all_blocks=1 00:28:14.224 --rc geninfo_unexecuted_blocks=1 00:28:14.224 00:28:14.224 ' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:14.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.224 --rc genhtml_branch_coverage=1 00:28:14.224 --rc genhtml_function_coverage=1 00:28:14.224 --rc genhtml_legend=1 00:28:14.224 --rc geninfo_all_blocks=1 00:28:14.224 --rc geninfo_unexecuted_blocks=1 00:28:14.224 00:28:14.224 ' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:14.224 13:11:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:14.224 13:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.794 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.795 13:11:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.795 13:11:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:20.795 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:20.795 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.795 13:11:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:20.795 Found net devices under 0000:86:00.0: cvl_0_0 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:20.795 Found net devices under 0000:86:00.1: cvl_0_1 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:20.795 13:11:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.795 13:11:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:20.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:20.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:28:20.795 00:28:20.795 --- 10.0.0.2 ping statistics --- 00:28:20.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.795 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:28:20.795 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:28:20.795 00:28:20.795 --- 10.0.0.1 ping statistics --- 00:28:20.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.796 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:20.796 
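The `nvmf_tcp_init` steps traced above (flush both ports, move the target port into a fresh network namespace, assign 10.0.0.1/24 and 10.0.0.2/24, open TCP port 4420, then ping-check both directions) can be condensed into a standalone sketch. Interface names `cvl_0_0`/`cvl_0_1` and the namespace name are taken from the log; the `run`/`DRY_RUN` wrapper is an addition here so the sequence can be printed and inspected without root or the two-port NIC the real commands require.

```shell
#!/usr/bin/env bash
# Sketch of the TCP test-bed setup performed by nvmftestinit/nvmf_tcp_init.
# Interface and namespace names match the log. DRY_RUN defaults to 1 so the
# commands are printed instead of executed (the real ones need root + NIC).
set -euo pipefail

TARGET_IF=cvl_0_0        # port handed to the SPDK target
INITIATOR_IF=cvl_0_1     # port kept in the default namespace
NS=cvl_0_0_ns_spdk       # namespace the target process runs in

CMDS=()
run() {
    CMDS+=("$*")
    if [[ "${DRY_RUN:-1}" == "1" ]]; then echo "$*"; else "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"              # target port leaves host ns
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                # host -> target namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1            # target namespace -> host
```

Putting the target's port in its own namespace is what lets a single machine act as both NVMe/TCP initiator and target over real hardware, which is why the log then launches `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk`.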
13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2502475 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2502475 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2502475 ']' 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.796 [2024-11-18 13:11:17.600512] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:20.796 [2024-11-18 13:11:17.601437] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:28:20.796 [2024-11-18 13:11:17.601469] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.796 [2024-11-18 13:11:17.681795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:20.796 [2024-11-18 13:11:17.724024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.796 [2024-11-18 13:11:17.724064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.796 [2024-11-18 13:11:17.724072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.796 [2024-11-18 13:11:17.724079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.796 [2024-11-18 13:11:17.724084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.796 [2024-11-18 13:11:17.725283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.796 [2024-11-18 13:11:17.725284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.796 [2024-11-18 13:11:17.793712] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:20.796 [2024-11-18 13:11:17.794334] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:20.796 [2024-11-18 13:11:17.794528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.796 [2024-11-18 13:11:17.874029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.796 [2024-11-18 13:11:17.898310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.796 NULL1 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.796 Delay0 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2502535 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:20.796 13:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:20.796 [2024-11-18 13:11:18.001287] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
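Condensed from the `rpc_cmd` calls above, the `delete_subsystem.sh` scenario boils down to the RPC sequence below, shown with `scripts/rpc.py` (the path is an assumption; the log uses an `rpc_cmd` wrapper inside the target's namespace). The steps are printed rather than executed so the sketch can be read without a running `nvmf_tgt`.

```shell
#!/usr/bin/env bash
# RPC sequence behind the delete_subsystem test, as traced in the log.
# $RPC is a hypothetical path to the SPDK JSON-RPC client.
RPC="./scripts/rpc.py"
STEPS=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"
  "$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"
  "$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
  "$RPC bdev_null_create NULL1 1000 512"    # 1000 MiB null bdev, 512 B blocks
  "$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000"
  "$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0"
  # spdk_nvme_perf now runs against the listener (-t 5 -q 128) while the
  # delay bdev holds every I/O for ~1 s, then the subsystem is torn down
  # underneath the outstanding commands:
  "$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1"
)
printf '%s\n' "${STEPS[@]}"
```

The delay bdev (all four latencies set to 1,000,000 µs) is the point of the test: with queue depth 128 and ~1 s per I/O, deleting the subsystem is guaranteed to race with commands still in flight. The "completed with error (sct=0, sc=8)" flood that follows is the expected outcome, not a failure of the test itself; if the generic status code 0x08 here follows the NVMe base specification, it denotes "Command Aborted due to SQ Deletion", consistent with the queues being destroyed by the delete.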
00:28:22.701 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:22.701 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.701 13:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, 
sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O 
failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 starting I/O failed: -6 00:28:22.701 Read completed with error (sct=0, sc=8) 00:28:22.701 Write completed with error (sct=0, sc=8) 00:28:22.701 
00:28:22.701 [hundreds of per-I/O "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted here and between the ERROR entries below]
00:28:22.701 [2024-11-18 13:11:20.079304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13252c0 is same with the state(6) to be set
00:28:22.702 [2024-11-18 13:11:20.080312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f987800d4d0 is same with the state(6) to be set
00:28:23.640 [2024-11-18 13:11:21.055456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13269a0 is same with the state(6) to be set
00:28:23.641 [2024-11-18 13:11:21.084359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13254a0 is same with the state(6) to be set
00:28:23.641 [2024-11-18 13:11:21.084552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1325860 is same with the state(6) to be set
00:28:23.641 [2024-11-18 13:11:21.084650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f987800d800 is same with the state(6) to be set
00:28:23.641 [2024-11-18 13:11:21.085487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f987800d020 is same with the state(6) to be set
00:28:23.641 Initializing NVMe Controllers
00:28:23.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:23.641 Controller IO queue size 128, less than required.
00:28:23.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:23.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:23.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:23.641 Initialization complete. Launching workers.
00:28:23.641 ========================================================
00:28:23.641                                                                 Latency(us)
00:28:23.641 Device Information                                              :     IOPS    MiB/s    Average       min       max
00:28:23.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   188.37     0.09  896036.09    403.29  1010429.50
00:28:23.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   153.58     0.07  933816.88    245.57  1011914.17
00:28:23.641 ========================================================
00:28:23.641 Total                                                           :   341.95     0.17  913004.50    245.57  1011914.17
00:28:23.641
00:28:23.641 [2024-11-18 13:11:21.086163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13269a0 (9): Bad file descriptor
00:28:23.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:23.641 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:23.641 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:23.641 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2502535
00:28:23.641 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- #
kill -0 2502535 00:28:23.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2502535) - No such process 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2502535 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2502535 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2502535 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:23.901 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.902 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.161 [2024-11-18 13:11:21.614385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2503186 00:28:24.161 13:11:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2503186 00:28:24.161 13:11:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:24.161 [2024-11-18 13:11:21.699291] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:24.729 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:24.729 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2503186 00:28:24.729 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:24.986 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:24.987 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2503186 00:28:24.987 13:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:25.553 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:25.553 13:11:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2503186 00:28:25.553 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:26.121 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:26.121 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2503186 00:28:26.121 13:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:26.689 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:26.689 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2503186 00:28:26.689 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:27.256 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:27.256 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2503186 00:28:27.256 13:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:27.256 Initializing NVMe Controllers 00:28:27.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.256 Controller IO queue size 128, less than required. 00:28:27.256 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:27.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:27.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:27.256 Initialization complete. Launching workers.
00:28:27.256 ========================================================
00:28:27.256                                                                 Latency(us)
00:28:27.256 Device Information                                              :     IOPS    MiB/s     Average        min        max
00:28:27.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00     0.06  1002284.98  1000118.40  1040991.75
00:28:27.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00     0.06  1004175.99  1000216.02  1011963.79
00:28:27.256 ========================================================
00:28:27.256 Total                                                           :   256.00     0.12  1003230.49  1000118.40  1040991.75
00:28:27.256
00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2503186
00:28:27.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2503186) - No such process
00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2503186
00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.515 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.515 rmmod nvme_tcp 00:28:27.515 rmmod nvme_fabrics 00:28:27.515 rmmod nvme_keyring 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2502475 ']' 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2502475 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2502475 ']' 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2502475 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2502475 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2502475' 00:28:27.774 killing process with pid 2502475 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2502475 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2502475 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
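The delete_subsystem.sh xtrace above repeatedly runs `kill -0 <pid>` and `sleep 0.5` inside a `(( delay++ > N ))` loop until the perf process disappears (ending in `kill: (...) - No such process`). A minimal standalone sketch of that wait-for-exit pattern; `wait_for_exit` and its retry budget are hypothetical names, not the SPDK helpers:

```shell
# Hypothetical sketch of the delay/kill -0 polling pattern traced above.
# Returns 0 once the PID is gone, 1 if the retry budget runs out first.
wait_for_exit() {
    local pid=$1 max_tries=${2:-20} delay=0
    # kill -0 sends no signal; it only tests whether the PID still exists.
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > max_tries )); then
            return 1    # gave up: process is still alive
        fi
        sleep 0.5
    done
    return 0            # process has exited (or never existed)
}

sleep 5 &
pid=$!
wait_for_exit "$pid" 1 || echo "retry budget exhausted, $pid still alive"
kill "$pid" && wait "$pid" 2>/dev/null || true   # reap it so kill -0 stops seeing it
wait_for_exit "$pid" && echo "$pid has exited"
```

Note that `kill -0` still succeeds for an unreaped zombie child, which is why the sketch reaps its own child with `wait` before the final check; the SPDK script polls an external process, so reaping happens elsewhere.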
00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.774 13:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:30.311 00:28:30.311 real 0m16.125s 00:28:30.311 user 0m25.922s 00:28:30.311 sys 0m6.165s 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.311 ************************************ 00:28:30.311 END TEST nvmf_delete_subsystem 00:28:30.311 ************************************ 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:30.311 ************************************ 00:28:30.311 START TEST nvmf_host_management 00:28:30.311 ************************************ 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:30.311 * Looking for test storage... 
00:28:30.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.311 13:11:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.311 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:30.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.312 --rc genhtml_branch_coverage=1 00:28:30.312 --rc genhtml_function_coverage=1 00:28:30.312 --rc genhtml_legend=1 00:28:30.312 --rc geninfo_all_blocks=1 00:28:30.312 --rc geninfo_unexecuted_blocks=1 00:28:30.312 00:28:30.312 ' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:30.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.312 --rc genhtml_branch_coverage=1 00:28:30.312 --rc genhtml_function_coverage=1 00:28:30.312 --rc genhtml_legend=1 00:28:30.312 --rc geninfo_all_blocks=1 00:28:30.312 --rc geninfo_unexecuted_blocks=1 00:28:30.312 00:28:30.312 ' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:30.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.312 --rc genhtml_branch_coverage=1 00:28:30.312 --rc genhtml_function_coverage=1 00:28:30.312 --rc genhtml_legend=1 00:28:30.312 --rc geninfo_all_blocks=1 00:28:30.312 --rc geninfo_unexecuted_blocks=1 00:28:30.312 00:28:30.312 ' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:30.312 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.312 --rc genhtml_branch_coverage=1 00:28:30.312 --rc genhtml_function_coverage=1 00:28:30.312 --rc genhtml_legend=1 00:28:30.312 --rc geninfo_all_blocks=1 00:28:30.312 --rc geninfo_unexecuted_blocks=1 00:28:30.312 00:28:30.312 ' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.312 13:11:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.312 
13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:30.312 13:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.883 
13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.883 13:11:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:36.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.883 13:11:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:36.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.883 13:11:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:36.883 Found net devices under 0000:86:00.0: cvl_0_0 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:36.883 Found net devices under 0000:86:00.1: cvl_0_1 00:28:36.883 13:11:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.883 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:28:36.884 00:28:36.884 --- 10.0.0.2 ping statistics --- 00:28:36.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.884 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:28:36.884 00:28:36.884 --- 10.0.0.1 ping statistics --- 00:28:36.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.884 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2507177 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2507177 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2507177 ']' 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:36.884 [2024-11-18 13:11:33.767350] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:36.884 [2024-11-18 13:11:33.768253] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:28:36.884 [2024-11-18 13:11:33.768285] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.884 [2024-11-18 13:11:33.848270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.884 [2024-11-18 13:11:33.889585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.884 [2024-11-18 13:11:33.889628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.884 [2024-11-18 13:11:33.889635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.884 [2024-11-18 13:11:33.889641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.884 [2024-11-18 13:11:33.889646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:36.884 [2024-11-18 13:11:33.891154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.884 [2024-11-18 13:11:33.891264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.884 [2024-11-18 13:11:33.891388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.884 [2024-11-18 13:11:33.891389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:36.884 [2024-11-18 13:11:33.959514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:36.884 [2024-11-18 13:11:33.960301] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:36.884 [2024-11-18 13:11:33.960551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:36.884 [2024-11-18 13:11:33.960956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:36.884 [2024-11-18 13:11:33.960992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:36.884 13:11:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:36.884 [2024-11-18 13:11:34.036059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:36.884 13:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:36.884 Malloc0 00:28:36.884 [2024-11-18 13:11:34.128242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2507384 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2507384 /var/tmp/bdevperf.sock 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2507384 ']' 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:36.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.884 { 00:28:36.884 "params": { 00:28:36.884 "name": "Nvme$subsystem", 00:28:36.884 "trtype": "$TEST_TRANSPORT", 00:28:36.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.884 "adrfam": "ipv4", 00:28:36.884 "trsvcid": "$NVMF_PORT", 00:28:36.884 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.884 "hdgst": ${hdgst:-false}, 00:28:36.884 "ddgst": ${ddgst:-false} 00:28:36.884 }, 00:28:36.884 "method": "bdev_nvme_attach_controller" 00:28:36.884 } 00:28:36.884 EOF 00:28:36.884 )") 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:36.884 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:36.884 "params": { 00:28:36.884 "name": "Nvme0", 00:28:36.884 "trtype": "tcp", 00:28:36.884 "traddr": "10.0.0.2", 00:28:36.884 "adrfam": "ipv4", 00:28:36.884 "trsvcid": "4420", 00:28:36.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:36.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:36.885 "hdgst": false, 00:28:36.885 "ddgst": false 00:28:36.885 }, 00:28:36.885 "method": "bdev_nvme_attach_controller" 00:28:36.885 }' 00:28:36.885 [2024-11-18 13:11:34.227642] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:28:36.885 [2024-11-18 13:11:34.227691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507384 ] 00:28:36.885 [2024-11-18 13:11:34.306037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.885 [2024-11-18 13:11:34.347632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.144 Running I/O for 10 seconds... 
00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:37.144 13:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:37.144 13:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=735 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 735 -ge 100 ']' 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.404 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:37.404 [2024-11-18 13:11:35.071791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the 
state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.071949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934fa0 is same with the state(6) to be set 00:28:37.404 [2024-11-18 13:11:35.072819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.404 [2024-11-18 13:11:35.072852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.404 [2024-11-18 13:11:35.072868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.404 [2024-11-18 13:11:35.072876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.404 [2024-11-18 13:11:35.072885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.072892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.072901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.072908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.072916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.072923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.072931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.072938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.072947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.072953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.072966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.072973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.072981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.072988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.072996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 
[2024-11-18 13:11:35.073176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073256] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073339] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.405 [2024-11-18 13:11:35.073500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.405 [2024-11-18 13:11:35.073508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:37.405 [2024-11-18 13:11:35.073515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.405 [2024-11-18 13:11:35.073523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.406 [2024-11-18 13:11:35.073816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.073855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.406 [2024-11-18 13:11:35.074810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:37.406 task offset: 107136 on job bdev=Nvme0n1 fails
00:28:37.406
00:28:37.406 Latency(us)
00:28:37.406 [2024-11-18T12:11:35.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:37.406 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:37.406 Job: Nvme0n1 ended in about 0.41 seconds with error
00:28:37.406 Verification LBA range: start 0x0 length 0x400
00:28:37.406 Nvme0n1 : 0.41 2050.68 128.17 157.74 0.00 28182.02 1752.38 27468.13
00:28:37.406 [2024-11-18T12:11:35.108Z] ===================================================================================================================
00:28:37.406 [2024-11-18T12:11:35.108Z] Total : 2050.68 128.17 157.74 0.00 28182.02 1752.38 27468.13
00:28:37.406 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:37.406 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:37.406 [2024-11-18 13:11:35.077325] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:37.406 [2024-11-18 13:11:35.077347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb69500 (9): Bad file descriptor
00:28:37.406 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:37.406 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:37.406 [2024-11-18 13:11:35.078390] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:28:37.406 [2024-11-18 13:11:35.078457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:28:37.406 [2024-11-18 13:11:35.078479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.406 [2024-11-18 13:11:35.078491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:28:37.406 [2024-11-18 13:11:35.078499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:28:37.406 [2024-11-18 13:11:35.078506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.406 [2024-11-18 13:11:35.078512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb69500
00:28:37.406 [2024-11-18 13:11:35.078531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb69500 (9): Bad file descriptor
00:28:37.406 [2024-11-18 13:11:35.078543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:37.406 [2024-11-18 13:11:35.078549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:37.406 [2024-11-18 13:11:35.078558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:37.406 [2024-11-18 13:11:35.078566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:37.406 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:37.406 13:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2507384
00:28:38.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2507384) - No such process
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:38.782 {
00:28:38.782 "params": {
00:28:38.782 "name": "Nvme$subsystem",
00:28:38.782 "trtype": "$TEST_TRANSPORT",
00:28:38.782 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:38.782 "adrfam": "ipv4",
00:28:38.782 "trsvcid": "$NVMF_PORT",
00:28:38.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:38.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:38.782 "hdgst": ${hdgst:-false},
00:28:38.782 "ddgst": ${ddgst:-false}
00:28:38.782 },
00:28:38.782 "method": "bdev_nvme_attach_controller"
00:28:38.782 }
00:28:38.782 EOF
00:28:38.782 )")
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:28:38.782 13:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:38.782 "params": {
00:28:38.782 "name": "Nvme0",
00:28:38.782 "trtype": "tcp",
00:28:38.782 "traddr": "10.0.0.2",
00:28:38.782 "adrfam": "ipv4",
00:28:38.782 "trsvcid": "4420",
00:28:38.782 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:38.782 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:38.782 "hdgst": false,
00:28:38.782 "ddgst": false
00:28:38.782 },
00:28:38.782 "method": "bdev_nvme_attach_controller"
00:28:38.782 }'
00:28:38.782 [2024-11-18 13:11:36.143704] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:28:38.782 [2024-11-18 13:11:36.143752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507684 ]
00:28:38.782 [2024-11-18 13:11:36.217602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:38.782 [2024-11-18 13:11:36.256433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:38.782 Running I/O for 1 seconds...
00:28:40.161 1984.00 IOPS, 124.00 MiB/s
00:28:40.161
00:28:40.161 Latency(us)
00:28:40.161 [2024-11-18T12:11:37.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:40.161 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:40.161 Verification LBA range: start 0x0 length 0x400
00:28:40.161 Nvme0n1 : 1.01 2020.95 126.31 0.00 0.00 31165.42 6952.51 27582.11
00:28:40.161 [2024-11-18T12:11:37.863Z] ===================================================================================================================
00:28:40.161 [2024-11-18T12:11:37.863Z] Total : 2020.95 126.31 0.00 0.00 31165.42 6952.51 27582.11
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:40.161 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2507177 ']'
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2507177
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2507177 ']'
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2507177
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2507177
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2507177'
00:28:40.161 killing process with pid 2507177
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2507177
00:28:40.161 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2507177
00:28:40.421 [2024-11-18 13:11:37.913413] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:40.421 13:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:42.330 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:42.330 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:28:42.330
00:28:42.330 real 0m12.426s
00:28:42.330 user 0m18.181s
00:28:42.330 sys 0m6.410s
00:28:42.330 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:42.330 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:42.330 ************************************
00:28:42.330 END TEST nvmf_host_management
00:28:42.330 ************************************
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:42.590 ************************************
00:28:42.590 START TEST nvmf_lvol
00:28:42.590 ************************************
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:28:42.590 * Looking for test storage...
00:28:42.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:28:42.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:42.590 --rc genhtml_branch_coverage=1
00:28:42.590 --rc genhtml_function_coverage=1
00:28:42.590 --rc genhtml_legend=1
00:28:42.590 --rc geninfo_all_blocks=1
00:28:42.590 --rc geninfo_unexecuted_blocks=1
00:28:42.590
00:28:42.590 '
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:28:42.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:42.590 --rc genhtml_branch_coverage=1
00:28:42.590 --rc genhtml_function_coverage=1
00:28:42.590 --rc genhtml_legend=1
00:28:42.590 --rc geninfo_all_blocks=1
00:28:42.590 --rc geninfo_unexecuted_blocks=1
00:28:42.590
00:28:42.590 '
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:28:42.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:42.590 --rc genhtml_branch_coverage=1
00:28:42.590 --rc genhtml_function_coverage=1
00:28:42.590 --rc genhtml_legend=1
00:28:42.590 --rc geninfo_all_blocks=1
00:28:42.590 --rc geninfo_unexecuted_blocks=1
00:28:42.590
00:28:42.590 '
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:28:42.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:42.590 --rc genhtml_branch_coverage=1
00:28:42.590 --rc genhtml_function_coverage=1
00:28:42.590 --rc genhtml_legend=1
00:28:42.590 --rc geninfo_all_blocks=1
00:28:42.590 --rc geninfo_unexecuted_blocks=1
00:28:42.590
00:28:42.590 '
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:42.590 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:42.591 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:42.850 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:42.850 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:42.850 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:42.850
13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.850 13:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:49.427 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:49.428 13:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:49.428 13:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:49.428 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:49.428 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.428 13:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:49.428 Found net devices under 0000:86:00.0: cvl_0_0 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.428 13:11:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:49.428 Found net devices under 0000:86:00.1: cvl_0_1 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.428 13:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.428 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.428 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.428 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:49.428 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:49.428 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:49.428 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:49.428 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:49.428 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:49.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:28:49.428 00:28:49.428 --- 10.0.0.2 ping statistics --- 00:28:49.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.429 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:49.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:49.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:28:49.429 00:28:49.429 --- 10.0.0.1 ping statistics --- 00:28:49.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.429 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2511446 
00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2511446 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2511446 ']' 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:49.429 [2024-11-18 13:11:46.258719] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:49.429 [2024-11-18 13:11:46.259634] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:28:49.429 [2024-11-18 13:11:46.259667] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.429 [2024-11-18 13:11:46.339591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:49.429 [2024-11-18 13:11:46.381629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.429 [2024-11-18 13:11:46.381666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.429 [2024-11-18 13:11:46.381674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.429 [2024-11-18 13:11:46.381680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.429 [2024-11-18 13:11:46.381686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.429 [2024-11-18 13:11:46.383074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.429 [2024-11-18 13:11:46.383179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.429 [2024-11-18 13:11:46.383181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.429 [2024-11-18 13:11:46.449791] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:49.429 [2024-11-18 13:11:46.450567] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:49.429 [2024-11-18 13:11:46.450849] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:49.429 [2024-11-18 13:11:46.450982] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:49.429 [2024-11-18 13:11:46.687938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:49.429 13:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:49.688 13:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:49.688 13:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:49.688 13:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:49.947 13:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2924253d-1e95-412f-9872-0247c9d880b9 00:28:49.947 13:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2924253d-1e95-412f-9872-0247c9d880b9 lvol 20 00:28:50.205 13:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=44c47063-ef70-46b3-ab49-06034ee6ea93 00:28:50.206 13:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:50.464 13:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44c47063-ef70-46b3-ab49-06034ee6ea93 00:28:50.723 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.723 [2024-11-18 13:11:48.343861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.723 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:50.982 
13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2511750 00:28:50.982 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:50.982 13:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:51.919 13:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 44c47063-ef70-46b3-ab49-06034ee6ea93 MY_SNAPSHOT 00:28:52.177 13:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1d87279d-ab70-4ff3-afae-2fb82b509a59 00:28:52.177 13:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 44c47063-ef70-46b3-ab49-06034ee6ea93 30 00:28:52.436 13:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1d87279d-ab70-4ff3-afae-2fb82b509a59 MY_CLONE 00:28:52.695 13:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=82f77b42-9b54-4e24-9ede-dcb1f3a59d56 00:28:52.695 13:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 82f77b42-9b54-4e24-9ede-dcb1f3a59d56 00:28:53.265 13:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2511750 00:29:01.392 Initializing NVMe Controllers 00:29:01.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:01.392 
Controller IO queue size 128, less than required. 00:29:01.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:01.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:01.392 Initialization complete. Launching workers. 00:29:01.392 ======================================================== 00:29:01.392 Latency(us) 00:29:01.392 Device Information : IOPS MiB/s Average min max 00:29:01.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12046.90 47.06 10624.52 1556.34 64199.85 00:29:01.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11998.70 46.87 10667.64 2488.40 68575.29 00:29:01.392 ======================================================== 00:29:01.392 Total : 24045.60 93.93 10646.04 1556.34 68575.29 00:29:01.392 00:29:01.392 13:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:01.651 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 44c47063-ef70-46b3-ab49-06034ee6ea93 00:29:01.651 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2924253d-1e95-412f-9872-0247c9d880b9 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.912 rmmod nvme_tcp 00:29:01.912 rmmod nvme_fabrics 00:29:01.912 rmmod nvme_keyring 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2511446 ']' 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2511446 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2511446 ']' 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2511446 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:01.912 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 2511446 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2511446' 00:29:02.171 killing process with pid 2511446 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2511446 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2511446 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.171 13:11:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.171 13:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.709 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.709 00:29:04.709 real 0m21.813s 00:29:04.709 user 0m55.642s 00:29:04.709 sys 0m9.618s 00:29:04.709 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:04.709 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:04.709 ************************************ 00:29:04.709 END TEST nvmf_lvol 00:29:04.709 ************************************ 00:29:04.709 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:04.709 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:04.709 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:04.709 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:04.709 ************************************ 00:29:04.709 START TEST nvmf_lvs_grow 00:29:04.709 ************************************ 00:29:04.709 13:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:04.709 * Looking for test storage... 
00:29:04.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.709 13:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:04.709 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.710 13:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:04.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.710 --rc genhtml_branch_coverage=1 00:29:04.710 --rc genhtml_function_coverage=1 00:29:04.710 --rc genhtml_legend=1 00:29:04.710 --rc geninfo_all_blocks=1 00:29:04.710 --rc geninfo_unexecuted_blocks=1 00:29:04.710 00:29:04.710 ' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:04.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.710 --rc genhtml_branch_coverage=1 00:29:04.710 --rc genhtml_function_coverage=1 00:29:04.710 --rc genhtml_legend=1 00:29:04.710 --rc geninfo_all_blocks=1 00:29:04.710 --rc geninfo_unexecuted_blocks=1 00:29:04.710 00:29:04.710 ' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:04.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.710 --rc genhtml_branch_coverage=1 00:29:04.710 --rc genhtml_function_coverage=1 00:29:04.710 --rc genhtml_legend=1 00:29:04.710 --rc geninfo_all_blocks=1 00:29:04.710 --rc geninfo_unexecuted_blocks=1 00:29:04.710 00:29:04.710 ' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:04.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.710 --rc genhtml_branch_coverage=1 00:29:04.710 --rc genhtml_function_coverage=1 00:29:04.710 --rc genhtml_legend=1 00:29:04.710 --rc geninfo_all_blocks=1 00:29:04.710 --rc 
geninfo_unexecuted_blocks=1 00:29:04.710 00:29:04.710 ' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:04.710 13:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.710 13:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.710 13:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.710 13:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:11.285 
13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.285 13:12:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.285 13:12:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:11.285 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:11.285 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:11.285 Found net devices under 0000:86:00.0: cvl_0_0 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.285 13:12:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:11.285 Found net devices under 0000:86:00.1: cvl_0_1 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.285 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.286 
13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.286 13:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:29:11.286 00:29:11.286 --- 10.0.0.2 ping statistics --- 00:29:11.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.286 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:29:11.286 00:29:11.286 --- 10.0.0.1 ping statistics --- 00:29:11.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.286 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:11.286 13:12:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2517584 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2517584 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2517584 ']' 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:11.286 [2024-11-18 13:12:08.203588] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:11.286 [2024-11-18 13:12:08.204509] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:29:11.286 [2024-11-18 13:12:08.204542] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.286 [2024-11-18 13:12:08.280055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.286 [2024-11-18 13:12:08.319715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.286 [2024-11-18 13:12:08.319753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.286 [2024-11-18 13:12:08.319760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.286 [2024-11-18 13:12:08.319766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.286 [2024-11-18 13:12:08.319771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.286 [2024-11-18 13:12:08.320318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.286 [2024-11-18 13:12:08.386573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:11.286 [2024-11-18 13:12:08.386788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:11.286 [2024-11-18 13:12:08.624980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:11.286 ************************************ 00:29:11.286 START TEST lvs_grow_clean 00:29:11.286 ************************************ 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:29:11.286 13:12:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:11.286 13:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:11.545 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:11.545 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:11.545 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:11.803 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:11.803 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:11.803 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 lvol 150 00:29:12.061 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=86a0ee39-600d-42b7-9282-c302d331de08 00:29:12.061 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:12.061 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:12.061 [2024-11-18 13:12:09.720708] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:12.061 [2024-11-18 13:12:09.720839] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:12.061 true 00:29:12.061 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:12.061 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:12.320 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:12.320 13:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:12.579 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86a0ee39-600d-42b7-9282-c302d331de08 00:29:12.838 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:12.838 [2024-11-18 13:12:10.521213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2518084 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2518084 /var/tmp/bdevperf.sock 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2518084 ']' 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:13.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:13.097 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:13.097 [2024-11-18 13:12:10.780302] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:29:13.097 [2024-11-18 13:12:10.780357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518084 ] 00:29:13.357 [2024-11-18 13:12:10.856940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.357 [2024-11-18 13:12:10.899975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.357 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:13.357 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:29:13.357 13:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:13.616 Nvme0n1 00:29:13.616 13:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:13.875 [ 00:29:13.875 { 00:29:13.875 "name": "Nvme0n1", 00:29:13.875 "aliases": [ 00:29:13.875 "86a0ee39-600d-42b7-9282-c302d331de08" 00:29:13.875 ], 00:29:13.875 "product_name": "NVMe disk", 00:29:13.875 
"block_size": 4096, 00:29:13.875 "num_blocks": 38912, 00:29:13.875 "uuid": "86a0ee39-600d-42b7-9282-c302d331de08", 00:29:13.875 "numa_id": 1, 00:29:13.875 "assigned_rate_limits": { 00:29:13.875 "rw_ios_per_sec": 0, 00:29:13.875 "rw_mbytes_per_sec": 0, 00:29:13.875 "r_mbytes_per_sec": 0, 00:29:13.875 "w_mbytes_per_sec": 0 00:29:13.875 }, 00:29:13.875 "claimed": false, 00:29:13.875 "zoned": false, 00:29:13.875 "supported_io_types": { 00:29:13.875 "read": true, 00:29:13.875 "write": true, 00:29:13.875 "unmap": true, 00:29:13.875 "flush": true, 00:29:13.875 "reset": true, 00:29:13.875 "nvme_admin": true, 00:29:13.875 "nvme_io": true, 00:29:13.875 "nvme_io_md": false, 00:29:13.875 "write_zeroes": true, 00:29:13.875 "zcopy": false, 00:29:13.875 "get_zone_info": false, 00:29:13.875 "zone_management": false, 00:29:13.875 "zone_append": false, 00:29:13.875 "compare": true, 00:29:13.875 "compare_and_write": true, 00:29:13.875 "abort": true, 00:29:13.875 "seek_hole": false, 00:29:13.875 "seek_data": false, 00:29:13.875 "copy": true, 00:29:13.875 "nvme_iov_md": false 00:29:13.875 }, 00:29:13.875 "memory_domains": [ 00:29:13.875 { 00:29:13.875 "dma_device_id": "system", 00:29:13.875 "dma_device_type": 1 00:29:13.875 } 00:29:13.875 ], 00:29:13.875 "driver_specific": { 00:29:13.875 "nvme": [ 00:29:13.875 { 00:29:13.875 "trid": { 00:29:13.875 "trtype": "TCP", 00:29:13.875 "adrfam": "IPv4", 00:29:13.875 "traddr": "10.0.0.2", 00:29:13.875 "trsvcid": "4420", 00:29:13.875 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:13.875 }, 00:29:13.875 "ctrlr_data": { 00:29:13.875 "cntlid": 1, 00:29:13.875 "vendor_id": "0x8086", 00:29:13.875 "model_number": "SPDK bdev Controller", 00:29:13.875 "serial_number": "SPDK0", 00:29:13.875 "firmware_revision": "25.01", 00:29:13.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.875 "oacs": { 00:29:13.875 "security": 0, 00:29:13.875 "format": 0, 00:29:13.875 "firmware": 0, 00:29:13.875 "ns_manage": 0 00:29:13.875 }, 00:29:13.875 "multi_ctrlr": true, 
00:29:13.875 "ana_reporting": false 00:29:13.875 }, 00:29:13.875 "vs": { 00:29:13.875 "nvme_version": "1.3" 00:29:13.875 }, 00:29:13.875 "ns_data": { 00:29:13.875 "id": 1, 00:29:13.875 "can_share": true 00:29:13.875 } 00:29:13.875 } 00:29:13.875 ], 00:29:13.875 "mp_policy": "active_passive" 00:29:13.875 } 00:29:13.875 } 00:29:13.875 ] 00:29:13.875 13:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2518092 00:29:13.875 13:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:13.875 13:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:13.875 Running I/O for 10 seconds... 00:29:15.253 Latency(us) 00:29:15.253 [2024-11-18T12:12:12.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.253 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:15.253 [2024-11-18T12:12:12.955Z] =================================================================================================================== 00:29:15.253 [2024-11-18T12:12:12.955Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:15.253 00:29:15.818 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:16.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.077 Nvme0n1 : 2.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:16.077 [2024-11-18T12:12:13.779Z] 
=================================================================================================================== 00:29:16.077 [2024-11-18T12:12:13.779Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:16.077 00:29:16.077 true 00:29:16.077 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:16.077 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:16.336 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:16.336 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:16.336 13:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2518092 00:29:16.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.904 Nvme0n1 : 3.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:16.904 [2024-11-18T12:12:14.606Z] =================================================================================================================== 00:29:16.904 [2024-11-18T12:12:14.606Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:16.904 00:29:18.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.284 Nvme0n1 : 4.00 22828.25 89.17 0.00 0.00 0.00 0.00 0.00 00:29:18.284 [2024-11-18T12:12:15.986Z] =================================================================================================================== 00:29:18.284 [2024-11-18T12:12:15.986Z] Total : 22828.25 89.17 0.00 0.00 0.00 0.00 0.00 00:29:18.284 00:29:18.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:18.900 Nvme0n1 : 5.00 22885.40 89.40 0.00 0.00 0.00 0.00 0.00 00:29:18.900 [2024-11-18T12:12:16.602Z] =================================================================================================================== 00:29:18.900 [2024-11-18T12:12:16.602Z] Total : 22885.40 89.40 0.00 0.00 0.00 0.00 0.00 00:29:18.900 00:29:19.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.897 Nvme0n1 : 6.00 22944.67 89.63 0.00 0.00 0.00 0.00 0.00 00:29:19.897 [2024-11-18T12:12:17.599Z] =================================================================================================================== 00:29:19.897 [2024-11-18T12:12:17.599Z] Total : 22944.67 89.63 0.00 0.00 0.00 0.00 0.00 00:29:19.897 00:29:21.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.275 Nvme0n1 : 7.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:21.275 [2024-11-18T12:12:18.977Z] =================================================================================================================== 00:29:21.275 [2024-11-18T12:12:18.977Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:21.275 00:29:22.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:22.211 Nvme0n1 : 8.00 23010.88 89.89 0.00 0.00 0.00 0.00 0.00 00:29:22.211 [2024-11-18T12:12:19.913Z] =================================================================================================================== 00:29:22.211 [2024-11-18T12:12:19.913Z] Total : 23010.88 89.89 0.00 0.00 0.00 0.00 0.00 00:29:22.211 00:29:23.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:23.149 Nvme0n1 : 9.00 22994.11 89.82 0.00 0.00 0.00 0.00 0.00 00:29:23.149 [2024-11-18T12:12:20.851Z] =================================================================================================================== 00:29:23.149 [2024-11-18T12:12:20.851Z] Total : 22994.11 89.82 0.00 0.00 0.00 0.00 0.00 00:29:23.149 
00:29:24.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.086 Nvme0n1 : 10.00 23018.80 89.92 0.00 0.00 0.00 0.00 0.00 00:29:24.086 [2024-11-18T12:12:21.788Z] =================================================================================================================== 00:29:24.086 [2024-11-18T12:12:21.788Z] Total : 23018.80 89.92 0.00 0.00 0.00 0.00 0.00 00:29:24.086 00:29:24.086 00:29:24.086 Latency(us) 00:29:24.086 [2024-11-18T12:12:21.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.086 Nvme0n1 : 10.00 23021.65 89.93 0.00 0.00 5556.87 3476.26 27240.18 00:29:24.086 [2024-11-18T12:12:21.789Z] =================================================================================================================== 00:29:24.087 [2024-11-18T12:12:21.789Z] Total : 23021.65 89.93 0.00 0.00 5556.87 3476.26 27240.18 00:29:24.087 { 00:29:24.087 "results": [ 00:29:24.087 { 00:29:24.087 "job": "Nvme0n1", 00:29:24.087 "core_mask": "0x2", 00:29:24.087 "workload": "randwrite", 00:29:24.087 "status": "finished", 00:29:24.087 "queue_depth": 128, 00:29:24.087 "io_size": 4096, 00:29:24.087 "runtime": 10.004322, 00:29:24.087 "iops": 23021.65004285148, 00:29:24.087 "mibps": 89.9283204798886, 00:29:24.087 "io_failed": 0, 00:29:24.087 "io_timeout": 0, 00:29:24.087 "avg_latency_us": 5556.8738199086765, 00:29:24.087 "min_latency_us": 3476.257391304348, 00:29:24.087 "max_latency_us": 27240.180869565218 00:29:24.087 } 00:29:24.087 ], 00:29:24.087 "core_count": 1 00:29:24.087 } 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2518084 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2518084 ']' 00:29:24.087 13:12:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2518084 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2518084 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2518084' 00:29:24.087 killing process with pid 2518084 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2518084 00:29:24.087 Received shutdown signal, test time was about 10.000000 seconds 00:29:24.087 00:29:24.087 Latency(us) 00:29:24.087 [2024-11-18T12:12:21.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.087 [2024-11-18T12:12:21.789Z] =================================================================================================================== 00:29:24.087 [2024-11-18T12:12:21.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.087 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2518084 00:29:24.346 13:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.346 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:24.605 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:24.605 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:24.864 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:24.864 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:24.864 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:25.123 [2024-11-18 13:12:22.644772] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:25.123 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:25.123 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:29:25.123 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:25.124 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:25.124 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.124 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:25.124 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.124 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:25.124 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.124 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:25.124 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:25.124 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:25.384 request: 00:29:25.384 { 00:29:25.384 "uuid": "9108c554-3c9b-4c8b-9c13-a56f5926bd44", 00:29:25.384 "method": 
"bdev_lvol_get_lvstores", 00:29:25.384 "req_id": 1 00:29:25.384 } 00:29:25.384 Got JSON-RPC error response 00:29:25.384 response: 00:29:25.384 { 00:29:25.384 "code": -19, 00:29:25.384 "message": "No such device" 00:29:25.384 } 00:29:25.384 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:29:25.384 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:25.384 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:25.384 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:25.384 13:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:25.384 aio_bdev 00:29:25.643 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 86a0ee39-600d-42b7-9282-c302d331de08 00:29:25.643 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=86a0ee39-600d-42b7-9282-c302d331de08 00:29:25.643 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:25.643 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:29:25.643 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:25.643 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:25.643 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:25.643 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 86a0ee39-600d-42b7-9282-c302d331de08 -t 2000 00:29:25.902 [ 00:29:25.902 { 00:29:25.902 "name": "86a0ee39-600d-42b7-9282-c302d331de08", 00:29:25.903 "aliases": [ 00:29:25.903 "lvs/lvol" 00:29:25.903 ], 00:29:25.903 "product_name": "Logical Volume", 00:29:25.903 "block_size": 4096, 00:29:25.903 "num_blocks": 38912, 00:29:25.903 "uuid": "86a0ee39-600d-42b7-9282-c302d331de08", 00:29:25.903 "assigned_rate_limits": { 00:29:25.903 "rw_ios_per_sec": 0, 00:29:25.903 "rw_mbytes_per_sec": 0, 00:29:25.903 "r_mbytes_per_sec": 0, 00:29:25.903 "w_mbytes_per_sec": 0 00:29:25.903 }, 00:29:25.903 "claimed": false, 00:29:25.903 "zoned": false, 00:29:25.903 "supported_io_types": { 00:29:25.903 "read": true, 00:29:25.903 "write": true, 00:29:25.903 "unmap": true, 00:29:25.903 "flush": false, 00:29:25.903 "reset": true, 00:29:25.903 "nvme_admin": false, 00:29:25.903 "nvme_io": false, 00:29:25.903 "nvme_io_md": false, 00:29:25.903 "write_zeroes": true, 00:29:25.903 "zcopy": false, 00:29:25.903 "get_zone_info": false, 00:29:25.903 "zone_management": false, 00:29:25.903 "zone_append": false, 00:29:25.903 "compare": false, 00:29:25.903 "compare_and_write": false, 00:29:25.903 "abort": false, 00:29:25.903 "seek_hole": true, 00:29:25.903 "seek_data": true, 00:29:25.903 "copy": false, 00:29:25.903 "nvme_iov_md": false 00:29:25.903 }, 00:29:25.903 "driver_specific": { 00:29:25.903 "lvol": { 00:29:25.903 "lvol_store_uuid": "9108c554-3c9b-4c8b-9c13-a56f5926bd44", 00:29:25.903 "base_bdev": "aio_bdev", 00:29:25.903 
"thin_provision": false, 00:29:25.903 "num_allocated_clusters": 38, 00:29:25.903 "snapshot": false, 00:29:25.903 "clone": false, 00:29:25.903 "esnap_clone": false 00:29:25.903 } 00:29:25.903 } 00:29:25.903 } 00:29:25.903 ] 00:29:25.903 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:29:25.903 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:25.903 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:26.162 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:26.162 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 00:29:26.162 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:26.422 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:26.422 13:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86a0ee39-600d-42b7-9282-c302d331de08 00:29:26.422 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9108c554-3c9b-4c8b-9c13-a56f5926bd44 
00:29:26.681 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:26.941 00:29:26.941 real 0m15.809s 00:29:26.941 user 0m15.283s 00:29:26.941 sys 0m1.524s 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.941 ************************************ 00:29:26.941 END TEST lvs_grow_clean 00:29:26.941 ************************************ 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:26.941 ************************************ 00:29:26.941 START TEST lvs_grow_dirty 00:29:26.941 ************************************ 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:26.941 13:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:26.941 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:27.200 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:27.200 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:27.459 13:12:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:27.459 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:27.459 13:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:27.719 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:27.719 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:27.719 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 lvol 150 00:29:27.719 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5be9fcb4-c28b-415b-90a9-8453b2ba4c5c 00:29:27.719 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:27.719 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:27.978 [2024-11-18 13:12:25.572713] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:27.978 [2024-11-18 
13:12:25.572849] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:27.978 true 00:29:27.978 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:27.978 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:28.237 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:28.237 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:28.496 13:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5be9fcb4-c28b-415b-90a9-8453b2ba4c5c 00:29:28.496 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:28.756 [2024-11-18 13:12:26.345149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.756 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2520665 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2520665 /var/tmp/bdevperf.sock 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2520665 ']' 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:29.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:29.015 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:29.015 [2024-11-18 13:12:26.576540] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:29:29.015 [2024-11-18 13:12:26.576585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520665 ] 00:29:29.015 [2024-11-18 13:12:26.649150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.015 [2024-11-18 13:12:26.692979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.274 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:29.274 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:29:29.274 13:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:29.534 Nvme0n1 00:29:29.534 13:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:29.534 [ 00:29:29.534 { 00:29:29.534 "name": "Nvme0n1", 00:29:29.534 "aliases": [ 00:29:29.534 "5be9fcb4-c28b-415b-90a9-8453b2ba4c5c" 00:29:29.534 ], 00:29:29.534 "product_name": "NVMe disk", 00:29:29.534 "block_size": 4096, 00:29:29.534 "num_blocks": 38912, 00:29:29.534 "uuid": "5be9fcb4-c28b-415b-90a9-8453b2ba4c5c", 00:29:29.534 "numa_id": 1, 00:29:29.534 "assigned_rate_limits": { 00:29:29.534 "rw_ios_per_sec": 0, 00:29:29.534 "rw_mbytes_per_sec": 0, 00:29:29.534 "r_mbytes_per_sec": 0, 00:29:29.534 "w_mbytes_per_sec": 0 00:29:29.534 }, 00:29:29.534 "claimed": false, 00:29:29.534 "zoned": false, 
00:29:29.534 "supported_io_types": { 00:29:29.534 "read": true, 00:29:29.534 "write": true, 00:29:29.534 "unmap": true, 00:29:29.534 "flush": true, 00:29:29.534 "reset": true, 00:29:29.534 "nvme_admin": true, 00:29:29.534 "nvme_io": true, 00:29:29.534 "nvme_io_md": false, 00:29:29.534 "write_zeroes": true, 00:29:29.534 "zcopy": false, 00:29:29.534 "get_zone_info": false, 00:29:29.534 "zone_management": false, 00:29:29.534 "zone_append": false, 00:29:29.534 "compare": true, 00:29:29.534 "compare_and_write": true, 00:29:29.534 "abort": true, 00:29:29.534 "seek_hole": false, 00:29:29.534 "seek_data": false, 00:29:29.534 "copy": true, 00:29:29.534 "nvme_iov_md": false 00:29:29.534 }, 00:29:29.534 "memory_domains": [ 00:29:29.534 { 00:29:29.534 "dma_device_id": "system", 00:29:29.534 "dma_device_type": 1 00:29:29.534 } 00:29:29.534 ], 00:29:29.534 "driver_specific": { 00:29:29.534 "nvme": [ 00:29:29.534 { 00:29:29.534 "trid": { 00:29:29.534 "trtype": "TCP", 00:29:29.534 "adrfam": "IPv4", 00:29:29.534 "traddr": "10.0.0.2", 00:29:29.534 "trsvcid": "4420", 00:29:29.534 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:29.534 }, 00:29:29.534 "ctrlr_data": { 00:29:29.534 "cntlid": 1, 00:29:29.534 "vendor_id": "0x8086", 00:29:29.534 "model_number": "SPDK bdev Controller", 00:29:29.534 "serial_number": "SPDK0", 00:29:29.534 "firmware_revision": "25.01", 00:29:29.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.534 "oacs": { 00:29:29.534 "security": 0, 00:29:29.534 "format": 0, 00:29:29.534 "firmware": 0, 00:29:29.534 "ns_manage": 0 00:29:29.534 }, 00:29:29.534 "multi_ctrlr": true, 00:29:29.534 "ana_reporting": false 00:29:29.534 }, 00:29:29.534 "vs": { 00:29:29.534 "nvme_version": "1.3" 00:29:29.534 }, 00:29:29.534 "ns_data": { 00:29:29.534 "id": 1, 00:29:29.534 "can_share": true 00:29:29.534 } 00:29:29.534 } 00:29:29.534 ], 00:29:29.534 "mp_policy": "active_passive" 00:29:29.534 } 00:29:29.534 } 00:29:29.534 ] 00:29:29.794 13:12:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2520684 00:29:29.794 13:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:29.794 13:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:29.794 Running I/O for 10 seconds... 00:29:30.731 Latency(us) 00:29:30.731 [2024-11-18T12:12:28.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.731 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:30.731 [2024-11-18T12:12:28.433Z] =================================================================================================================== 00:29:30.731 [2024-11-18T12:12:28.433Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:30.731 00:29:31.666 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:31.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.666 Nvme0n1 : 2.00 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:29:31.666 [2024-11-18T12:12:29.368Z] =================================================================================================================== 00:29:31.666 [2024-11-18T12:12:29.368Z] Total : 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:29:31.666 00:29:31.924 true 00:29:31.924 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:31.924 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:32.182 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:32.182 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:32.182 13:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2520684 00:29:32.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.751 Nvme0n1 : 3.00 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:29:32.751 [2024-11-18T12:12:30.453Z] =================================================================================================================== 00:29:32.751 [2024-11-18T12:12:30.453Z] Total : 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:29:32.751 00:29:33.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.687 Nvme0n1 : 4.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:33.687 [2024-11-18T12:12:31.389Z] =================================================================================================================== 00:29:33.687 [2024-11-18T12:12:31.389Z] Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:33.687 00:29:35.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.065 Nvme0n1 : 5.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:35.065 [2024-11-18T12:12:32.767Z] =================================================================================================================== 00:29:35.065 [2024-11-18T12:12:32.767Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:35.065 00:29:36.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:36.001 Nvme0n1 : 6.00 22786.00 89.01 0.00 0.00 0.00 0.00 0.00 00:29:36.001 [2024-11-18T12:12:33.703Z] =================================================================================================================== 00:29:36.001 [2024-11-18T12:12:33.703Z] Total : 22786.00 89.01 0.00 0.00 0.00 0.00 0.00 00:29:36.001 00:29:36.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.938 Nvme0n1 : 7.00 22815.00 89.12 0.00 0.00 0.00 0.00 0.00 00:29:36.938 [2024-11-18T12:12:34.640Z] =================================================================================================================== 00:29:36.938 [2024-11-18T12:12:34.640Z] Total : 22815.00 89.12 0.00 0.00 0.00 0.00 0.00 00:29:36.938 00:29:37.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.876 Nvme0n1 : 8.00 22852.38 89.27 0.00 0.00 0.00 0.00 0.00 00:29:37.876 [2024-11-18T12:12:35.578Z] =================================================================================================================== 00:29:37.876 [2024-11-18T12:12:35.578Z] Total : 22852.38 89.27 0.00 0.00 0.00 0.00 0.00 00:29:37.876 00:29:38.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.813 Nvme0n1 : 9.00 22867.33 89.33 0.00 0.00 0.00 0.00 0.00 00:29:38.813 [2024-11-18T12:12:36.515Z] =================================================================================================================== 00:29:38.813 [2024-11-18T12:12:36.515Z] Total : 22867.33 89.33 0.00 0.00 0.00 0.00 0.00 00:29:38.813 00:29:39.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.750 Nvme0n1 : 10.00 22885.70 89.40 0.00 0.00 0.00 0.00 0.00 00:29:39.750 [2024-11-18T12:12:37.452Z] =================================================================================================================== 00:29:39.750 [2024-11-18T12:12:37.452Z] Total : 22885.70 89.40 0.00 0.00 0.00 0.00 0.00 00:29:39.750 00:29:39.750 
00:29:39.750 Latency(us) 00:29:39.750 [2024-11-18T12:12:37.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.750 Nvme0n1 : 10.00 22889.45 89.41 0.00 0.00 5588.82 2649.93 25074.64 00:29:39.750 [2024-11-18T12:12:37.452Z] =================================================================================================================== 00:29:39.750 [2024-11-18T12:12:37.452Z] Total : 22889.45 89.41 0.00 0.00 5588.82 2649.93 25074.64 00:29:39.750 { 00:29:39.750 "results": [ 00:29:39.750 { 00:29:39.750 "job": "Nvme0n1", 00:29:39.750 "core_mask": "0x2", 00:29:39.750 "workload": "randwrite", 00:29:39.750 "status": "finished", 00:29:39.750 "queue_depth": 128, 00:29:39.750 "io_size": 4096, 00:29:39.750 "runtime": 10.003954, 00:29:39.750 "iops": 22889.44951166309, 00:29:39.750 "mibps": 89.41191215493394, 00:29:39.750 "io_failed": 0, 00:29:39.750 "io_timeout": 0, 00:29:39.750 "avg_latency_us": 5588.824425917399, 00:29:39.750 "min_latency_us": 2649.9339130434782, 00:29:39.750 "max_latency_us": 25074.64347826087 00:29:39.750 } 00:29:39.750 ], 00:29:39.750 "core_count": 1 00:29:39.750 } 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2520665 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2520665 ']' 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2520665 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:39.750 13:12:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2520665 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2520665' 00:29:39.750 killing process with pid 2520665 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2520665 00:29:39.750 Received shutdown signal, test time was about 10.000000 seconds 00:29:39.750 00:29:39.750 Latency(us) 00:29:39.750 [2024-11-18T12:12:37.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.750 [2024-11-18T12:12:37.452Z] =================================================================================================================== 00:29:39.750 [2024-11-18T12:12:37.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.750 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2520665 00:29:40.009 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:40.268 13:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:40.528 13:12:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:40.528 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:40.528 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:40.528 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:40.528 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2517584 00:29:40.528 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2517584 00:29:40.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2517584 Killed "${NVMF_APP[@]}" "$@" 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2522526 00:29:40.787 13:12:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2522526 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2522526 ']' 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:40.787 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:40.787 [2024-11-18 13:12:38.301578] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:40.787 [2024-11-18 13:12:38.302486] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:29:40.787 [2024-11-18 13:12:38.302520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.787 [2024-11-18 13:12:38.380202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.787 [2024-11-18 13:12:38.421137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.787 [2024-11-18 13:12:38.421173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.787 [2024-11-18 13:12:38.421180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.787 [2024-11-18 13:12:38.421186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.787 [2024-11-18 13:12:38.421191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.787 [2024-11-18 13:12:38.421748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.047 [2024-11-18 13:12:38.488035] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:41.047 [2024-11-18 13:12:38.488266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:41.047 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:41.047 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:29:41.047 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.047 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:41.047 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:41.047 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.047 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:41.047 [2024-11-18 13:12:38.727133] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:41.047 [2024-11-18 13:12:38.727325] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:41.047 [2024-11-18 13:12:38.727426] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:41.306 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:41.306 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5be9fcb4-c28b-415b-90a9-8453b2ba4c5c 00:29:41.306 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=5be9fcb4-c28b-415b-90a9-8453b2ba4c5c 00:29:41.307 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:41.307 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:41.307 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:41.307 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:41.307 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:41.307 13:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5be9fcb4-c28b-415b-90a9-8453b2ba4c5c -t 2000 00:29:41.566 [ 00:29:41.566 { 00:29:41.566 "name": "5be9fcb4-c28b-415b-90a9-8453b2ba4c5c", 00:29:41.566 "aliases": [ 00:29:41.566 "lvs/lvol" 00:29:41.566 ], 00:29:41.566 "product_name": "Logical Volume", 00:29:41.566 "block_size": 4096, 00:29:41.566 "num_blocks": 38912, 00:29:41.566 "uuid": "5be9fcb4-c28b-415b-90a9-8453b2ba4c5c", 00:29:41.566 "assigned_rate_limits": { 00:29:41.566 "rw_ios_per_sec": 0, 00:29:41.566 "rw_mbytes_per_sec": 0, 00:29:41.566 "r_mbytes_per_sec": 0, 00:29:41.566 "w_mbytes_per_sec": 0 00:29:41.566 }, 00:29:41.566 "claimed": false, 00:29:41.566 "zoned": false, 00:29:41.566 "supported_io_types": { 00:29:41.566 "read": true, 00:29:41.566 "write": true, 00:29:41.566 "unmap": true, 00:29:41.566 "flush": false, 00:29:41.566 "reset": true, 00:29:41.566 "nvme_admin": false, 00:29:41.566 "nvme_io": false, 00:29:41.566 "nvme_io_md": false, 00:29:41.566 "write_zeroes": true, 
00:29:41.566 "zcopy": false, 00:29:41.566 "get_zone_info": false, 00:29:41.566 "zone_management": false, 00:29:41.566 "zone_append": false, 00:29:41.566 "compare": false, 00:29:41.566 "compare_and_write": false, 00:29:41.566 "abort": false, 00:29:41.566 "seek_hole": true, 00:29:41.566 "seek_data": true, 00:29:41.566 "copy": false, 00:29:41.566 "nvme_iov_md": false 00:29:41.566 }, 00:29:41.566 "driver_specific": { 00:29:41.566 "lvol": { 00:29:41.566 "lvol_store_uuid": "255adbd3-ba9e-41fb-94f6-ca29140466a2", 00:29:41.566 "base_bdev": "aio_bdev", 00:29:41.566 "thin_provision": false, 00:29:41.566 "num_allocated_clusters": 38, 00:29:41.566 "snapshot": false, 00:29:41.566 "clone": false, 00:29:41.566 "esnap_clone": false 00:29:41.566 } 00:29:41.566 } 00:29:41.566 } 00:29:41.566 ] 00:29:41.566 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:41.566 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:41.566 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:41.825 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:41.825 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:41.825 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:41.825 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:41.825 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:42.084 [2024-11-18 13:12:39.686193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:42.084 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:42.344 request: 00:29:42.344 { 00:29:42.344 "uuid": "255adbd3-ba9e-41fb-94f6-ca29140466a2", 00:29:42.344 "method": "bdev_lvol_get_lvstores", 00:29:42.344 "req_id": 1 00:29:42.344 } 00:29:42.344 Got JSON-RPC error response 00:29:42.344 response: 00:29:42.344 { 00:29:42.344 "code": -19, 00:29:42.344 "message": "No such device" 00:29:42.344 } 00:29:42.344 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:29:42.344 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:42.344 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:42.344 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:42.344 13:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:42.603 aio_bdev 00:29:42.603 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5be9fcb4-c28b-415b-90a9-8453b2ba4c5c 00:29:42.603 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=5be9fcb4-c28b-415b-90a9-8453b2ba4c5c 00:29:42.603 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:42.603 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:42.603 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:42.604 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:42.604 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:42.863 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5be9fcb4-c28b-415b-90a9-8453b2ba4c5c -t 2000 00:29:42.863 [ 00:29:42.863 { 00:29:42.863 "name": "5be9fcb4-c28b-415b-90a9-8453b2ba4c5c", 00:29:42.863 "aliases": [ 00:29:42.863 "lvs/lvol" 00:29:42.863 ], 00:29:42.863 "product_name": "Logical Volume", 00:29:42.863 "block_size": 4096, 00:29:42.863 "num_blocks": 38912, 00:29:42.863 "uuid": "5be9fcb4-c28b-415b-90a9-8453b2ba4c5c", 00:29:42.863 "assigned_rate_limits": { 00:29:42.863 "rw_ios_per_sec": 0, 00:29:42.863 "rw_mbytes_per_sec": 0, 00:29:42.863 
"r_mbytes_per_sec": 0, 00:29:42.863 "w_mbytes_per_sec": 0 00:29:42.863 }, 00:29:42.863 "claimed": false, 00:29:42.863 "zoned": false, 00:29:42.863 "supported_io_types": { 00:29:42.863 "read": true, 00:29:42.863 "write": true, 00:29:42.863 "unmap": true, 00:29:42.863 "flush": false, 00:29:42.863 "reset": true, 00:29:42.863 "nvme_admin": false, 00:29:42.863 "nvme_io": false, 00:29:42.863 "nvme_io_md": false, 00:29:42.863 "write_zeroes": true, 00:29:42.863 "zcopy": false, 00:29:42.863 "get_zone_info": false, 00:29:42.863 "zone_management": false, 00:29:42.863 "zone_append": false, 00:29:42.863 "compare": false, 00:29:42.863 "compare_and_write": false, 00:29:42.863 "abort": false, 00:29:42.863 "seek_hole": true, 00:29:42.863 "seek_data": true, 00:29:42.863 "copy": false, 00:29:42.863 "nvme_iov_md": false 00:29:42.863 }, 00:29:42.863 "driver_specific": { 00:29:42.863 "lvol": { 00:29:42.863 "lvol_store_uuid": "255adbd3-ba9e-41fb-94f6-ca29140466a2", 00:29:42.863 "base_bdev": "aio_bdev", 00:29:42.863 "thin_provision": false, 00:29:42.863 "num_allocated_clusters": 38, 00:29:42.863 "snapshot": false, 00:29:42.863 "clone": false, 00:29:42.863 "esnap_clone": false 00:29:42.863 } 00:29:42.863 } 00:29:42.863 } 00:29:42.863 ] 00:29:42.863 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:42.863 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:42.863 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:43.122 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:43.122 13:12:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:43.122 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:43.381 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:43.381 13:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5be9fcb4-c28b-415b-90a9-8453b2ba4c5c 00:29:43.642 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 255adbd3-ba9e-41fb-94f6-ca29140466a2 00:29:43.903 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:43.903 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:43.903 00:29:43.903 real 0m17.015s 00:29:43.903 user 0m34.543s 00:29:43.903 sys 0m3.715s 00:29:43.903 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:43.903 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:43.903 ************************************ 00:29:43.903 END TEST lvs_grow_dirty 00:29:43.903 ************************************ 
00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:44.163 nvmf_trace.0 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.163 13:12:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.163 rmmod nvme_tcp 00:29:44.163 rmmod nvme_fabrics 00:29:44.163 rmmod nvme_keyring 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2522526 ']' 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2522526 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2522526 ']' 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2522526 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2522526 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:44.163 
13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2522526' 00:29:44.163 killing process with pid 2522526 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2522526 00:29:44.163 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2522526 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.423 13:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.328 
13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:46.328 00:29:46.328 real 0m42.060s 00:29:46.328 user 0m52.279s 00:29:46.328 sys 0m10.229s 00:29:46.328 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:46.328 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.328 ************************************ 00:29:46.328 END TEST nvmf_lvs_grow 00:29:46.328 ************************************ 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:46.588 ************************************ 00:29:46.588 START TEST nvmf_bdev_io_wait 00:29:46.588 ************************************ 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:46.588 * Looking for test storage... 
00:29:46.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.588 --rc genhtml_branch_coverage=1 00:29:46.588 --rc genhtml_function_coverage=1 00:29:46.588 --rc genhtml_legend=1 00:29:46.588 --rc geninfo_all_blocks=1 00:29:46.588 --rc geninfo_unexecuted_blocks=1 00:29:46.588 00:29:46.588 ' 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.588 --rc genhtml_branch_coverage=1 00:29:46.588 --rc genhtml_function_coverage=1 00:29:46.588 --rc genhtml_legend=1 00:29:46.588 --rc geninfo_all_blocks=1 00:29:46.588 --rc geninfo_unexecuted_blocks=1 00:29:46.588 00:29:46.588 ' 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.588 --rc genhtml_branch_coverage=1 00:29:46.588 --rc genhtml_function_coverage=1 00:29:46.588 --rc genhtml_legend=1 00:29:46.588 --rc geninfo_all_blocks=1 00:29:46.588 --rc geninfo_unexecuted_blocks=1 00:29:46.588 00:29:46.588 ' 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.588 --rc genhtml_branch_coverage=1 00:29:46.588 --rc genhtml_function_coverage=1 
00:29:46.588 --rc genhtml_legend=1 00:29:46.588 --rc geninfo_all_blocks=1 00:29:46.588 --rc geninfo_unexecuted_blocks=1 00:29:46.588 00:29:46.588 ' 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.588 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.589 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.589 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.589 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.589 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.589 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.848 13:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.848 13:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.848 13:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.848 13:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.848 13:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:53.423 13:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.423 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:53.424 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:53.424 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:53.424 Found net devices under 0000:86:00.0: cvl_0_0 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:53.424 Found net devices under 0000:86:00.1: cvl_0_1 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.424 13:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.424 13:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:53.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:29:53.424 00:29:53.424 --- 10.0.0.2 ping statistics --- 00:29:53.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.424 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:29:53.424 00:29:53.424 --- 10.0.0.1 ping statistics --- 00:29:53.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.424 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.424 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.425 13:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2526573 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2526573 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2526573 ']' 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 [2024-11-18 13:12:50.268626] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:53.425 [2024-11-18 13:12:50.269587] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:29:53.425 [2024-11-18 13:12:50.269621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.425 [2024-11-18 13:12:50.347437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.425 [2024-11-18 13:12:50.392264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.425 [2024-11-18 13:12:50.392302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.425 [2024-11-18 13:12:50.392309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.425 [2024-11-18 13:12:50.392315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.425 [2024-11-18 13:12:50.392320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:53.425 [2024-11-18 13:12:50.393766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.425 [2024-11-18 13:12:50.393873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.425 [2024-11-18 13:12:50.393979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.425 [2024-11-18 13:12:50.393980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.425 [2024-11-18 13:12:50.394240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.425 13:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 [2024-11-18 13:12:50.526613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:53.425 [2024-11-18 13:12:50.527218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:53.425 [2024-11-18 13:12:50.527567] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:53.425 [2024-11-18 13:12:50.527683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 [2024-11-18 13:12:50.538615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 Malloc0 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.425 13:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.425 [2024-11-18 13:12:50.610892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2526604 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2526608 00:29:53.425 13:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:53.425 { 00:29:53.425 "params": { 00:29:53.425 "name": "Nvme$subsystem", 00:29:53.425 "trtype": "$TEST_TRANSPORT", 00:29:53.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.425 "adrfam": "ipv4", 00:29:53.425 "trsvcid": "$NVMF_PORT", 00:29:53.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.425 "hdgst": ${hdgst:-false}, 00:29:53.425 "ddgst": ${ddgst:-false} 00:29:53.425 }, 00:29:53.425 "method": "bdev_nvme_attach_controller" 00:29:53.425 } 00:29:53.425 EOF 00:29:53.425 )") 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2526610 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:53.425 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:53.425 13:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:53.425 { 00:29:53.425 "params": { 00:29:53.425 "name": "Nvme$subsystem", 00:29:53.425 "trtype": "$TEST_TRANSPORT", 00:29:53.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.425 "adrfam": "ipv4", 00:29:53.425 "trsvcid": "$NVMF_PORT", 00:29:53.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.425 "hdgst": ${hdgst:-false}, 00:29:53.426 "ddgst": ${ddgst:-false} 00:29:53.426 }, 00:29:53.426 "method": "bdev_nvme_attach_controller" 00:29:53.426 } 00:29:53.426 EOF 00:29:53.426 )") 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2526613 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:53.426 { 00:29:53.426 "params": { 00:29:53.426 "name": "Nvme$subsystem", 00:29:53.426 "trtype": "$TEST_TRANSPORT", 00:29:53.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.426 "adrfam": "ipv4", 00:29:53.426 "trsvcid": "$NVMF_PORT", 00:29:53.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.426 "hdgst": ${hdgst:-false}, 00:29:53.426 "ddgst": ${ddgst:-false} 00:29:53.426 }, 00:29:53.426 "method": "bdev_nvme_attach_controller" 00:29:53.426 } 00:29:53.426 EOF 00:29:53.426 )") 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:53.426 { 00:29:53.426 "params": { 00:29:53.426 "name": "Nvme$subsystem", 00:29:53.426 "trtype": "$TEST_TRANSPORT", 00:29:53.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.426 "adrfam": "ipv4", 00:29:53.426 "trsvcid": "$NVMF_PORT", 00:29:53.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.426 "hdgst": ${hdgst:-false}, 00:29:53.426 "ddgst": ${ddgst:-false} 00:29:53.426 }, 00:29:53.426 "method": 
"bdev_nvme_attach_controller" 00:29:53.426 } 00:29:53.426 EOF 00:29:53.426 )") 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2526604 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:53.426 "params": { 00:29:53.426 "name": "Nvme1", 00:29:53.426 "trtype": "tcp", 00:29:53.426 "traddr": "10.0.0.2", 00:29:53.426 "adrfam": "ipv4", 00:29:53.426 "trsvcid": "4420", 00:29:53.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.426 "hdgst": false, 00:29:53.426 "ddgst": false 00:29:53.426 }, 00:29:53.426 "method": "bdev_nvme_attach_controller" 00:29:53.426 }' 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:53.426 "params": { 00:29:53.426 "name": "Nvme1", 00:29:53.426 "trtype": "tcp", 00:29:53.426 "traddr": "10.0.0.2", 00:29:53.426 "adrfam": "ipv4", 00:29:53.426 "trsvcid": "4420", 00:29:53.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.426 "hdgst": false, 00:29:53.426 "ddgst": false 00:29:53.426 }, 00:29:53.426 "method": "bdev_nvme_attach_controller" 00:29:53.426 }' 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:53.426 "params": { 00:29:53.426 "name": "Nvme1", 00:29:53.426 "trtype": "tcp", 00:29:53.426 "traddr": "10.0.0.2", 00:29:53.426 "adrfam": "ipv4", 00:29:53.426 "trsvcid": "4420", 00:29:53.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.426 "hdgst": false, 00:29:53.426 "ddgst": false 00:29:53.426 }, 00:29:53.426 "method": "bdev_nvme_attach_controller" 00:29:53.426 }' 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:53.426 13:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:53.426 "params": { 00:29:53.426 "name": "Nvme1", 00:29:53.426 "trtype": "tcp", 00:29:53.426 "traddr": "10.0.0.2", 00:29:53.426 "adrfam": "ipv4", 00:29:53.426 "trsvcid": "4420", 00:29:53.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.426 "hdgst": false, 00:29:53.426 "ddgst": false 00:29:53.426 }, 00:29:53.426 "method": "bdev_nvme_attach_controller" 
00:29:53.426 }' 00:29:53.426 [2024-11-18 13:12:50.663227] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:29:53.426 [2024-11-18 13:12:50.663228] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:29:53.426 [2024-11-18 13:12:50.663279] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:53.426 [2024-11-18 13:12:50.663280] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:53.426 [2024-11-18 13:12:50.666753] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:29:53.426 [2024-11-18 13:12:50.666761] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:29:53.426 [2024-11-18 13:12:50.666797] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:53.426 [2024-11-18 13:12:50.666801] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:53.426 [2024-11-18 13:12:50.861443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.426 [2024-11-18 13:12:50.904616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:53.426 [2024-11-18 13:12:50.954431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.426 [2024-11-18 13:12:50.994336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.426 [2024-11-18 13:12:51.004156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:53.426 [2024-11-18 13:12:51.037438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:53.426 [2024-11-18 13:12:51.071700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.426 [2024-11-18 13:12:51.114481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:53.686 Running I/O for 1 seconds... 00:29:53.686 Running I/O for 1 seconds... 00:29:53.686 Running I/O for 1 seconds... 00:29:53.945 Running I/O for 1 seconds... 
00:29:54.512 11729.00 IOPS, 45.82 MiB/s 00:29:54.512 Latency(us) 00:29:54.512 [2024-11-18T12:12:52.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.512 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:54.512 Nvme1n1 : 1.01 11795.00 46.07 0.00 0.00 10818.39 1495.93 12423.35 00:29:54.512 [2024-11-18T12:12:52.214Z] =================================================================================================================== 00:29:54.512 [2024-11-18T12:12:52.214Z] Total : 11795.00 46.07 0.00 0.00 10818.39 1495.93 12423.35 00:29:54.772 11225.00 IOPS, 43.85 MiB/s 00:29:54.772 Latency(us) 00:29:54.772 [2024-11-18T12:12:52.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.772 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:54.772 Nvme1n1 : 1.01 11303.41 44.15 0.00 0.00 11292.47 4017.64 14531.90 00:29:54.772 [2024-11-18T12:12:52.474Z] =================================================================================================================== 00:29:54.772 [2024-11-18T12:12:52.474Z] Total : 11303.41 44.15 0.00 0.00 11292.47 4017.64 14531.90 00:29:54.772 10967.00 IOPS, 42.84 MiB/s [2024-11-18T12:12:52.474Z] 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2526608 00:29:54.772 00:29:54.772 Latency(us) 00:29:54.772 [2024-11-18T12:12:52.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.772 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:54.772 Nvme1n1 : 1.01 11041.06 43.13 0.00 0.00 11562.29 3960.65 16982.37 00:29:54.772 [2024-11-18T12:12:52.474Z] =================================================================================================================== 00:29:54.772 [2024-11-18T12:12:52.474Z] Total : 11041.06 43.13 0.00 0.00 11562.29 3960.65 16982.37 00:29:54.772 246720.00 IOPS, 963.75 MiB/s 00:29:54.772 
Latency(us) 00:29:54.772 [2024-11-18T12:12:52.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.772 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:54.772 Nvme1n1 : 1.00 246340.26 962.27 0.00 0.00 517.21 229.73 1538.67 00:29:54.772 [2024-11-18T12:12:52.474Z] =================================================================================================================== 00:29:54.772 [2024-11-18T12:12:52.474Z] Total : 246340.26 962.27 0.00 0.00 517.21 229.73 1538.67 00:29:54.772 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2526610 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2526613 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.032 rmmod nvme_tcp 00:29:55.032 rmmod nvme_fabrics 00:29:55.032 rmmod nvme_keyring 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2526573 ']' 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2526573 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2526573 ']' 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2526573 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2526573 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:55.032 13:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2526573' 00:29:55.032 killing process with pid 2526573 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2526573 00:29:55.032 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2526573 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.292 13:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.200 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.200 00:29:57.200 real 0m10.797s 00:29:57.200 user 0m15.203s 00:29:57.200 sys 0m6.514s 00:29:57.200 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:57.200 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:57.200 ************************************ 00:29:57.200 END TEST nvmf_bdev_io_wait 00:29:57.200 ************************************ 00:29:57.461 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:57.461 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:57.461 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:57.461 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:57.461 ************************************ 00:29:57.461 START TEST nvmf_queue_depth 00:29:57.461 ************************************ 00:29:57.461 13:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:57.461 * Looking for test storage... 
00:29:57.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:57.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.461 --rc genhtml_branch_coverage=1 00:29:57.461 --rc genhtml_function_coverage=1 00:29:57.461 --rc genhtml_legend=1 00:29:57.461 --rc geninfo_all_blocks=1 00:29:57.461 --rc geninfo_unexecuted_blocks=1 00:29:57.461 00:29:57.461 ' 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:57.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.461 --rc genhtml_branch_coverage=1 00:29:57.461 --rc genhtml_function_coverage=1 00:29:57.461 --rc genhtml_legend=1 00:29:57.461 --rc geninfo_all_blocks=1 00:29:57.461 --rc geninfo_unexecuted_blocks=1 00:29:57.461 00:29:57.461 ' 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:57.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.461 --rc genhtml_branch_coverage=1 00:29:57.461 --rc genhtml_function_coverage=1 00:29:57.461 --rc genhtml_legend=1 00:29:57.461 --rc geninfo_all_blocks=1 00:29:57.461 --rc geninfo_unexecuted_blocks=1 00:29:57.461 00:29:57.461 ' 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:57.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.461 --rc genhtml_branch_coverage=1 00:29:57.461 --rc genhtml_function_coverage=1 00:29:57.461 --rc genhtml_legend=1 00:29:57.461 --rc 
geninfo_all_blocks=1 00:29:57.461 --rc geninfo_unexecuted_blocks=1 00:29:57.461 00:29:57.461 ' 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.461 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.721 13:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.721 13:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:57.721 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.722 13:12:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.722 13:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:04.295 
13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:04.295 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.295 13:13:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:04.295 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:04.295 Found net devices under 0000:86:00.0: cvl_0_0 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.295 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:04.296 Found net devices under 0000:86:00.1: cvl_0_1 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:04.296 13:13:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:04.296 13:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:04.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:04.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:30:04.296 00:30:04.296 --- 10.0.0.2 ping statistics --- 00:30:04.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.296 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:30:04.296 00:30:04.296 --- 10.0.0.1 ping statistics --- 00:30:04.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.296 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:04.296 13:13:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2530389 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2530389 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2530389 ']' 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.296 [2024-11-18 13:13:01.161689] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:04.296 [2024-11-18 13:13:01.162710] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:30:04.296 [2024-11-18 13:13:01.162753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.296 [2024-11-18 13:13:01.247238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.296 [2024-11-18 13:13:01.286822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.296 [2024-11-18 13:13:01.286863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.296 [2024-11-18 13:13:01.286870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.296 [2024-11-18 13:13:01.286876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.296 [2024-11-18 13:13:01.286880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.296 [2024-11-18 13:13:01.287429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.296 [2024-11-18 13:13:01.354394] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:04.296 [2024-11-18 13:13:01.354616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:04.296 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.297 [2024-11-18 13:13:01.432079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.297 Malloc0 00:30:04.297 13:13:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.297 [2024-11-18 13:13:01.508154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.297 
13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2530600 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2530600 /var/tmp/bdevperf.sock 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2530600 ']' 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:04.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.297 [2024-11-18 13:13:01.558266] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:30:04.297 [2024-11-18 13:13:01.558309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530600 ] 00:30:04.297 [2024-11-18 13:13:01.631497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.297 [2024-11-18 13:13:01.672849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:04.297 NVMe0n1 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.297 13:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:04.297 Running I/O for 10 seconds... 
00:30:06.614 11674.00 IOPS, 45.60 MiB/s [2024-11-18T12:13:05.252Z] 11801.50 IOPS, 46.10 MiB/s [2024-11-18T12:13:06.189Z] 11951.67 IOPS, 46.69 MiB/s [2024-11-18T12:13:07.125Z] 12054.50 IOPS, 47.09 MiB/s [2024-11-18T12:13:08.061Z] 12139.40 IOPS, 47.42 MiB/s [2024-11-18T12:13:08.998Z] 12223.50 IOPS, 47.75 MiB/s [2024-11-18T12:13:10.373Z] 12242.29 IOPS, 47.82 MiB/s [2024-11-18T12:13:11.310Z] 12281.12 IOPS, 47.97 MiB/s [2024-11-18T12:13:12.248Z] 12279.56 IOPS, 47.97 MiB/s [2024-11-18T12:13:12.248Z] 12286.60 IOPS, 47.99 MiB/s 00:30:14.546 Latency(us) 00:30:14.546 [2024-11-18T12:13:12.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.546 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:14.546 Verification LBA range: start 0x0 length 0x4000 00:30:14.546 NVMe0n1 : 10.06 12317.84 48.12 0.00 0.00 82867.36 16298.52 54708.31 00:30:14.546 [2024-11-18T12:13:12.248Z] =================================================================================================================== 00:30:14.546 [2024-11-18T12:13:12.248Z] Total : 12317.84 48.12 0.00 0.00 82867.36 16298.52 54708.31 00:30:14.546 { 00:30:14.546 "results": [ 00:30:14.546 { 00:30:14.546 "job": "NVMe0n1", 00:30:14.546 "core_mask": "0x1", 00:30:14.546 "workload": "verify", 00:30:14.546 "status": "finished", 00:30:14.546 "verify_range": { 00:30:14.546 "start": 0, 00:30:14.546 "length": 16384 00:30:14.546 }, 00:30:14.546 "queue_depth": 1024, 00:30:14.546 "io_size": 4096, 00:30:14.546 "runtime": 10.056388, 00:30:14.546 "iops": 12317.842151675135, 00:30:14.546 "mibps": 48.116570904980996, 00:30:14.546 "io_failed": 0, 00:30:14.546 "io_timeout": 0, 00:30:14.546 "avg_latency_us": 82867.36412478558, 00:30:14.546 "min_latency_us": 16298.518260869565, 00:30:14.546 "max_latency_us": 54708.31304347826 00:30:14.546 } 00:30:14.546 ], 00:30:14.546 "core_count": 1 00:30:14.546 } 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2530600 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2530600 ']' 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2530600 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2530600 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2530600' 00:30:14.546 killing process with pid 2530600 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2530600 00:30:14.546 Received shutdown signal, test time was about 10.000000 seconds 00:30:14.546 00:30:14.546 Latency(us) 00:30:14.546 [2024-11-18T12:13:12.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.546 [2024-11-18T12:13:12.248Z] =================================================================================================================== 00:30:14.546 [2024-11-18T12:13:12.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:14.546 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2530600 00:30:14.806 13:13:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.806 rmmod nvme_tcp 00:30:14.806 rmmod nvme_fabrics 00:30:14.806 rmmod nvme_keyring 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2530389 ']' 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2530389 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2530389 ']' 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2530389 00:30:14.806 13:13:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2530389 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2530389' 00:30:14.806 killing process with pid 2530389 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2530389 00:30:14.806 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2530389 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.066 13:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.974 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.974 00:30:16.974 real 0m19.651s 00:30:16.974 user 0m22.644s 00:30:16.974 sys 0m6.258s 00:30:16.974 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:16.974 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:16.974 ************************************ 00:30:16.974 END TEST nvmf_queue_depth 00:30:16.974 ************************************ 00:30:16.974 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:16.974 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:16.974 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:16.974 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:17.235 ************************************ 00:30:17.235 START 
TEST nvmf_target_multipath 00:30:17.235 ************************************ 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:17.235 * Looking for test storage... 00:30:17.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.235 13:13:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:17.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.235 --rc genhtml_branch_coverage=1 00:30:17.235 --rc genhtml_function_coverage=1 00:30:17.235 --rc genhtml_legend=1 00:30:17.235 --rc geninfo_all_blocks=1 00:30:17.235 --rc geninfo_unexecuted_blocks=1 00:30:17.235 00:30:17.235 ' 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:17.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.235 --rc genhtml_branch_coverage=1 00:30:17.235 --rc genhtml_function_coverage=1 00:30:17.235 --rc genhtml_legend=1 00:30:17.235 --rc geninfo_all_blocks=1 00:30:17.235 --rc geninfo_unexecuted_blocks=1 00:30:17.235 00:30:17.235 ' 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:17.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.235 --rc genhtml_branch_coverage=1 00:30:17.235 --rc genhtml_function_coverage=1 00:30:17.235 --rc genhtml_legend=1 00:30:17.235 --rc geninfo_all_blocks=1 00:30:17.235 --rc geninfo_unexecuted_blocks=1 00:30:17.235 00:30:17.235 ' 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:17.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.235 --rc genhtml_branch_coverage=1 00:30:17.235 --rc genhtml_function_coverage=1 00:30:17.235 --rc genhtml_legend=1 00:30:17.235 --rc geninfo_all_blocks=1 00:30:17.235 --rc geninfo_unexecuted_blocks=1 00:30:17.235 00:30:17.235 ' 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.235 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.236 13:13:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.236 13:13:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.236 13:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.801 13:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.801 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:23.802 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:23.802 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:23.802 Found net devices under 0000:86:00.0: cvl_0_0 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.802 13:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:23.802 Found net devices under 0000:86:00.1: cvl_0_1 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.802 13:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.802 13:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:30:23.802 00:30:23.802 --- 10.0.0.2 ping statistics --- 00:30:23.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.802 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:30:23.802 00:30:23.802 --- 10.0.0.1 ping statistics --- 00:30:23.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.802 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:23.802 only one NIC for nvmf test 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:23.802 13:13:20 
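The trace up to this point is the harness's `nvmf_tcp_init` sequence: it moves one detected E810 port into a private network namespace to act as the target, addresses both ends on 10.0.0.0/24, opens TCP/4420 through a tagged iptables rule, and ping-verifies the path in both directions before loading `nvme-tcp`. A standalone sketch of that setup, using the `cvl_0_0`/`cvl_0_1` interface names this particular run detected (substitute your own; requires root; illustrative only, not the harness itself):

```shell
# Sketch of the nvmf TCP test topology configured in the trace above.
# Interface names cvl_0_0 / cvl_0_1 are from this run; adjust as needed.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                        # private namespace for the target side
ip link set cvl_0_0 netns "$NS"           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port; the comment tags the rule so teardown can find it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'

ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

modprobe nvme-tcp                         # kernel initiator transport
```

The namespace gives the target a genuinely separate network stack, so initiator and target can share one host while still exercising a real NIC-to-NIC TCP path.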
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.802 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.802 rmmod nvme_tcp 00:30:23.802 rmmod nvme_fabrics 00:30:23.802 rmmod nvme_keyring 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:23.803 13:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.803 13:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.711 
13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.711 00:30:25.711 real 0m8.300s 00:30:25.711 user 0m1.847s 00:30:25.711 sys 0m4.468s 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:25.711 13:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:25.711 ************************************ 00:30:25.711 END TEST nvmf_target_multipath 00:30:25.711 ************************************ 00:30:25.711 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:25.711 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:25.711 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:25.711 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:25.711 ************************************ 00:30:25.711 START TEST nvmf_zcopy 00:30:25.711 ************************************ 00:30:25.711 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:25.711 * Looking for test storage... 
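The `nvmftestfini` passes above (the second run by the `trap`/`nvmftestfini` exit handler is intentionally idempotent) unload the NVMe modules and then run `iptr`: because every firewall rule the harness added carries an `SPDK_NVMF` comment, teardown can sweep them all in one dump-filter-restore pass instead of tracking rules individually. A sketch of that cleanup pattern, with the namespace/interface names assumed from this run:

```shell
# Sketch of the tag-and-sweep teardown (iptr + remove_spdk_ns) in the trace:
# drop every rule tagged SPDK_NVMF in a single pass, then tear down the
# target namespace and flush leftover test addresses.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Best-effort module unload (set +e in the harness tolerates "in use" errors).
modprobe -r nvme-tcp 2>/dev/null || true
modprobe -r nvme-fabrics 2>/dev/null || true

ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # removes cvl_0_0's ns
ip -4 addr flush cvl_0_1                              # initiator-side address
```

Tagging rules with a comment at insert time is what makes the one-line `iptables-save | grep -v | iptables-restore` cleanup safe: untagged, pre-existing rules pass through untouched.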
00:30:25.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:25.712 13:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:25.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.712 --rc genhtml_branch_coverage=1 00:30:25.712 --rc genhtml_function_coverage=1 00:30:25.712 --rc genhtml_legend=1 00:30:25.712 --rc geninfo_all_blocks=1 00:30:25.712 --rc geninfo_unexecuted_blocks=1 00:30:25.712 00:30:25.712 ' 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:25.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.712 --rc genhtml_branch_coverage=1 00:30:25.712 --rc genhtml_function_coverage=1 00:30:25.712 --rc genhtml_legend=1 00:30:25.712 --rc geninfo_all_blocks=1 00:30:25.712 --rc geninfo_unexecuted_blocks=1 00:30:25.712 00:30:25.712 ' 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:25.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.712 --rc genhtml_branch_coverage=1 00:30:25.712 --rc genhtml_function_coverage=1 00:30:25.712 --rc genhtml_legend=1 00:30:25.712 --rc geninfo_all_blocks=1 00:30:25.712 --rc geninfo_unexecuted_blocks=1 00:30:25.712 00:30:25.712 ' 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:25.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.712 --rc genhtml_branch_coverage=1 00:30:25.712 --rc genhtml_function_coverage=1 00:30:25.712 --rc genhtml_legend=1 00:30:25.712 --rc geninfo_all_blocks=1 00:30:25.712 --rc geninfo_unexecuted_blocks=1 00:30:25.712 00:30:25.712 ' 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.712 13:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.712 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:25.713 13:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:25.713 13:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.287 
13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.287 13:13:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:32.287 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:32.287 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:32.287 Found net devices under 0000:86:00.0: cvl_0_0 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.287 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:32.288 Found net devices under 0000:86:00.1: cvl_0_1 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.288 13:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.288 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:32.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:30:32.288 00:30:32.288 --- 10.0.0.2 ping statistics --- 00:30:32.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.288 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:30:32.288 00:30:32.288 --- 10.0.0.1 ping statistics --- 00:30:32.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.288 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2539209 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2539209 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2539209 ']' 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.288 [2024-11-18 13:13:29.274147] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:32.288 [2024-11-18 13:13:29.275076] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:30:32.288 [2024-11-18 13:13:29.275110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.288 [2024-11-18 13:13:29.355040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.288 [2024-11-18 13:13:29.395950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.288 [2024-11-18 13:13:29.395987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.288 [2024-11-18 13:13:29.395994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.288 [2024-11-18 13:13:29.396000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.288 [2024-11-18 13:13:29.396005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.288 [2024-11-18 13:13:29.396553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.288 [2024-11-18 13:13:29.463023] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:32.288 [2024-11-18 13:13:29.463243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.288 [2024-11-18 13:13:29.533226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.288 
13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.288 [2024-11-18 13:13:29.561508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:32.288 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.289 malloc0 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:32.289 { 00:30:32.289 "params": { 00:30:32.289 "name": "Nvme$subsystem", 00:30:32.289 "trtype": "$TEST_TRANSPORT", 00:30:32.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.289 "adrfam": "ipv4", 00:30:32.289 "trsvcid": "$NVMF_PORT", 00:30:32.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.289 "hdgst": ${hdgst:-false}, 00:30:32.289 "ddgst": ${ddgst:-false} 00:30:32.289 }, 00:30:32.289 "method": "bdev_nvme_attach_controller" 00:30:32.289 } 00:30:32.289 EOF 00:30:32.289 )") 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:32.289 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:32.289 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:32.289 "params": { 00:30:32.289 "name": "Nvme1", 00:30:32.289 "trtype": "tcp", 00:30:32.289 "traddr": "10.0.0.2", 00:30:32.289 "adrfam": "ipv4", 00:30:32.289 "trsvcid": "4420", 00:30:32.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:32.289 "hdgst": false, 00:30:32.289 "ddgst": false 00:30:32.289 }, 00:30:32.289 "method": "bdev_nvme_attach_controller" 00:30:32.289 }' 00:30:32.289 [2024-11-18 13:13:29.655288] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:30:32.289 [2024-11-18 13:13:29.655334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539291 ] 00:30:32.289 [2024-11-18 13:13:29.729915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.289 [2024-11-18 13:13:29.771112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.548 Running I/O for 10 seconds... 
00:30:34.422 8161.00 IOPS, 63.76 MiB/s [2024-11-18T12:13:33.211Z] 8292.00 IOPS, 64.78 MiB/s [2024-11-18T12:13:34.148Z] 8319.33 IOPS, 64.99 MiB/s [2024-11-18T12:13:35.527Z] 8347.50 IOPS, 65.21 MiB/s [2024-11-18T12:13:36.464Z] 8367.80 IOPS, 65.37 MiB/s [2024-11-18T12:13:37.402Z] 8380.00 IOPS, 65.47 MiB/s [2024-11-18T12:13:38.339Z] 8386.86 IOPS, 65.52 MiB/s [2024-11-18T12:13:39.276Z] 8395.25 IOPS, 65.59 MiB/s [2024-11-18T12:13:40.212Z] 8402.78 IOPS, 65.65 MiB/s [2024-11-18T12:13:40.212Z] 8399.40 IOPS, 65.62 MiB/s 00:30:42.510 Latency(us) 00:30:42.510 [2024-11-18T12:13:40.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.510 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:42.510 Verification LBA range: start 0x0 length 0x1000 00:30:42.510 Nvme1n1 : 10.01 8402.29 65.64 0.00 0.00 15190.42 2293.76 22339.23 00:30:42.510 [2024-11-18T12:13:40.212Z] =================================================================================================================== 00:30:42.510 [2024-11-18T12:13:40.212Z] Total : 8402.29 65.64 0.00 0.00 15190.42 2293.76 22339.23 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2540911 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:42.770 13:13:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.770 { 00:30:42.770 "params": { 00:30:42.770 "name": "Nvme$subsystem", 00:30:42.770 "trtype": "$TEST_TRANSPORT", 00:30:42.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.770 "adrfam": "ipv4", 00:30:42.770 "trsvcid": "$NVMF_PORT", 00:30:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.770 "hdgst": ${hdgst:-false}, 00:30:42.770 "ddgst": ${ddgst:-false} 00:30:42.770 }, 00:30:42.770 "method": "bdev_nvme_attach_controller" 00:30:42.770 } 00:30:42.770 EOF 00:30:42.770 )") 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:42.770 [2024-11-18 13:13:40.292884] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.770 [2024-11-18 13:13:40.292916] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:42.770 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.770 "params": { 00:30:42.770 "name": "Nvme1", 00:30:42.770 "trtype": "tcp", 00:30:42.770 "traddr": "10.0.0.2", 00:30:42.770 "adrfam": "ipv4", 00:30:42.770 "trsvcid": "4420", 00:30:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.770 "hdgst": false, 00:30:42.770 "ddgst": false 00:30:42.770 }, 00:30:42.770 "method": "bdev_nvme_attach_controller" 00:30:42.770 }' 00:30:42.770 [2024-11-18 13:13:40.304846] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.770 [2024-11-18 13:13:40.304859] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.770 [2024-11-18 13:13:40.316846] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.770 [2024-11-18 13:13:40.316856] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.770 [2024-11-18 13:13:40.328850] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.770 [2024-11-18 13:13:40.328860] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.770 [2024-11-18 13:13:40.333893] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:30:42.771 [2024-11-18 13:13:40.333935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540911 ] 00:30:42.771 [2024-11-18 13:13:40.340844] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.340855] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.352851] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.352863] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.364846] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.364856] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.376844] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.376854] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.388846] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.388855] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.400844] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.400853] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.409728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.771 [2024-11-18 13:13:40.412846] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:42.771 [2024-11-18 13:13:40.412855] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.424845] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.424859] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.436850] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.436865] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.448844] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.448854] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:42.771 [2024-11-18 13:13:40.451698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.771 [2024-11-18 13:13:40.460845] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.771 [2024-11-18 13:13:40.460856] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.472857] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.472878] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.484849] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.484865] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.496847] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.496860] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.508850] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.508863] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.520847] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.520857] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.532855] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.532872] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.544854] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.544872] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.556852] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.556867] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.568852] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.568868] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.580850] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.580866] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.592854] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.592872] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 Running I/O for 5 seconds... 
00:30:43.030 [2024-11-18 13:13:40.608851] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.608872] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.622477] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.622498] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.637878] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.637898] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.653106] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.653125] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.663578] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.663598] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.678717] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.678737] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.693848] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.693867] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.705044] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.030 [2024-11-18 13:13:40.705063] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.030 [2024-11-18 13:13:40.718901] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.031 [2024-11-18 13:13:40.718921] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.733998] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.734018] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.748962] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.748982] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.762654] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.762673] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.777671] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.777691] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.792756] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.792776] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.803911] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.803930] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.819225] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.819244] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.834637] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.834656] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.850078] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.850097] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.865094] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.865113] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.877764] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.877784] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.893247] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.893266] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.908463] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.908482] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.921218] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.921237] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.934449] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.934469] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.950103] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 
[2024-11-18 13:13:40.950123] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.960103] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.960122] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.289 [2024-11-18 13:13:40.974823] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.289 [2024-11-18 13:13:40.974843] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:40.989752] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:40.989771] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.004854] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.004875] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.016616] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.016635] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.030866] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.030885] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.045822] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.045841] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.060573] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.060593] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.075144] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.075162] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.090228] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.090247] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.105142] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.105161] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.120860] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.120878] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.135164] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.135184] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.150310] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.150329] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.165233] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.165251] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.180727] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.180746] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:43.549 [2024-11-18 13:13:41.192674] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.192693] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.207275] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.207294] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.222736] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.222754] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.549 [2024-11-18 13:13:41.238366] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.549 [2024-11-18 13:13:41.238385] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.253104] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.253123] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.264581] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.264602] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.278942] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.278962] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.293984] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.294003] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.308937] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.308962] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.323049] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.323068] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.338140] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.338160] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.353084] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.353103] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.365877] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.365895] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.380796] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.380815] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.395319] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.395338] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.410312] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.410332] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.425501] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.425520] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.440721] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.440740] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.454157] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.454176] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.469296] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.469314] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.484422] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.484441] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.809 [2024-11-18 13:13:41.499204] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:43.809 [2024-11-18 13:13:41.499223] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.514278] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 [2024-11-18 13:13:41.514298] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.529467] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 [2024-11-18 13:13:41.529486] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.544702] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 
[2024-11-18 13:13:41.544721] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.559232] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 [2024-11-18 13:13:41.559252] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.574536] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 [2024-11-18 13:13:41.574555] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.589524] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 [2024-11-18 13:13:41.589547] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.604596] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 [2024-11-18 13:13:41.604615] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 16310.00 IOPS, 127.42 MiB/s [2024-11-18T12:13:41.770Z] [2024-11-18 13:13:41.618889] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 [2024-11-18 13:13:41.618908] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.634004] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 [2024-11-18 13:13:41.634022] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.649191] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.068 [2024-11-18 13:13:41.649209] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.068 [2024-11-18 13:13:41.665117] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.069 
[2024-11-18 13:13:41.665136] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.069 [2024-11-18 13:13:41.678376] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.069 [2024-11-18 13:13:41.678395] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.069 [2024-11-18 13:13:41.694041] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.069 [2024-11-18 13:13:41.694060] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.069 [2024-11-18 13:13:41.709179] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.069 [2024-11-18 13:13:41.709197] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.069 [2024-11-18 13:13:41.724604] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.069 [2024-11-18 13:13:41.724623] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.069 [2024-11-18 13:13:41.735942] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.069 [2024-11-18 13:13:41.735961] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.069 [2024-11-18 13:13:41.750617] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.069 [2024-11-18 13:13:41.750636] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.766257] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.766277] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.776504] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.776522] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.790528] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.790547] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.805218] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.805237] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.816846] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.816865] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.830656] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.830675] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.846233] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.846252] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.861564] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.861586] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.877014] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.877033] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.888641] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.888660] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:44.328 [2024-11-18 13:13:41.903496] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.903515] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.328 [2024-11-18 13:13:41.919017] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.328 [2024-11-18 13:13:41.919036] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.329 [2024-11-18 13:13:41.934300] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.329 [2024-11-18 13:13:41.934319] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.329 [2024-11-18 13:13:41.949835] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.329 [2024-11-18 13:13:41.949853] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.329 [2024-11-18 13:13:41.965120] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.329 [2024-11-18 13:13:41.965140] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.329 [2024-11-18 13:13:41.976262] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.329 [2024-11-18 13:13:41.976281] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.329 [2024-11-18 13:13:41.990844] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.329 [2024-11-18 13:13:41.990863] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.329 [2024-11-18 13:13:42.006585] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.329 [2024-11-18 13:13:42.006605] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.329 [2024-11-18 13:13:42.021878] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.329 [2024-11-18 13:13:42.021898] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.588 [2024-11-18 13:13:42.036805] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.588 [2024-11-18 13:13:42.036825] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.588 [2024-11-18 13:13:42.048308] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.588 [2024-11-18 13:13:42.048328] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.588 [2024-11-18 13:13:42.063082] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.588 [2024-11-18 13:13:42.063101] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.588 [2024-11-18 13:13:42.078489] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.588 [2024-11-18 13:13:42.078508] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.588 [2024-11-18 13:13:42.093759] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.588 [2024-11-18 13:13:42.093777] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.588 [2024-11-18 13:13:42.109840] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.588 [2024-11-18 13:13:42.109860] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.588 [2024-11-18 13:13:42.125001] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:44.588 [2024-11-18 13:13:42.125022] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:44.588 [2024-11-18 13:13:42.136307] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:44.588 [2024-11-18 13:13:42.136327] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:44.588 [2024-11-18 13:13:42.150846] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:44.588 [2024-11-18 13:13:42.150866] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" / nvmf_rpc.c:1539:nvmf_rpc_ns_paused: "Unable to add namespace") repeats at roughly 11-16 ms intervals from 13:13:42.166095 onward, elapsed time 00:30:44.588 through 00:30:46.928, with periodic throughput reports interleaved ...]
00:30:45.108 16302.50 IOPS, 127.36 MiB/s [2024-11-18T12:13:42.810Z]
00:30:46.147 16345.00 IOPS, 127.70 MiB/s [2024-11-18T12:13:43.849Z]
[... error pair repetition continues ...]
00:30:46.928 [2024-11-18 13:13:44.476706] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:46.928 [2024-11-18 13:13:44.476725] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:46.928 [2024-11-18 13:13:44.487641] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.487659] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.928 [2024-11-18 13:13:44.503310] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.503329] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.928 [2024-11-18 13:13:44.518775] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.518794] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.928 [2024-11-18 13:13:44.534181] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.534199] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.928 [2024-11-18 13:13:44.549494] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.549512] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.928 [2024-11-18 13:13:44.564560] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.564584] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.928 [2024-11-18 13:13:44.578786] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.578805] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.928 [2024-11-18 13:13:44.594384] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.594402] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.928 [2024-11-18 13:13:44.609711] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.609730] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.928 16316.50 IOPS, 127.47 MiB/s [2024-11-18T12:13:44.630Z] [2024-11-18 13:13:44.624875] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.928 [2024-11-18 13:13:44.624894] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.636646] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.636665] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.651096] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.651114] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.666702] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.666721] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.682153] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.682172] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.697524] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.697543] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.712515] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.712534] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.725487] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.725505] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.741436] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.741456] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.757516] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.757535] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.772995] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.773014] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.784315] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.784334] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.798741] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.798760] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.813943] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.813961] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.828904] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.828923] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.842115] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.842142] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.857083] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.857102] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.869542] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.869560] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.188 [2024-11-18 13:13:44.882999] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.188 [2024-11-18 13:13:44.883018] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:44.898420] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:44.898440] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:44.913398] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:44.913418] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:44.924942] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:44.924961] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:44.938606] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:44.938626] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:44.953668] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 
[2024-11-18 13:13:44.953687] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:44.968960] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:44.968979] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:44.980698] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:44.980717] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:44.995006] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:44.995025] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.010332] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.010359] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.025320] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.025340] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.041792] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.041814] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.057016] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.057036] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.068343] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.068368] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.083034] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.083053] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.098029] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.098048] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.112945] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.112968] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.123064] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.123083] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.447 [2024-11-18 13:13:45.138097] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.447 [2024-11-18 13:13:45.138117] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.148499] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.148520] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.162795] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.162815] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.177925] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.177945] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:47.707 [2024-11-18 13:13:45.193167] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.193186] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.205889] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.205907] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.217410] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.217428] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.230310] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.230329] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.245499] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.245518] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.260557] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.260576] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.274024] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.274043] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.289260] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.289279] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.304512] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.304531] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.319129] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.319149] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.333877] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.333895] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.349093] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.349112] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.361642] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.361661] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.374513] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.374531] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.707 [2024-11-18 13:13:45.389909] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.707 [2024-11-18 13:13:45.389927] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.966 [2024-11-18 13:13:45.404842] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.966 [2024-11-18 13:13:45.404863] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.966 [2024-11-18 13:13:45.416130] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:47.966 [2024-11-18 13:13:45.416149] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.966 [2024-11-18 13:13:45.431189] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.966 [2024-11-18 13:13:45.431207] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.966 [2024-11-18 13:13:45.446350] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.966 [2024-11-18 13:13:45.446375] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.966 [2024-11-18 13:13:45.461325] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.966 [2024-11-18 13:13:45.461343] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.966 [2024-11-18 13:13:45.477255] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.966 [2024-11-18 13:13:45.477274] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.966 [2024-11-18 13:13:45.493170] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.966 [2024-11-18 13:13:45.493189] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.966 [2024-11-18 13:13:45.508993] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.966 [2024-11-18 13:13:45.509013] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.966 [2024-11-18 13:13:45.519368] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.966 [2024-11-18 13:13:45.519402] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.967 [2024-11-18 13:13:45.534777] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.967 
[2024-11-18 13:13:45.534795] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.967 [2024-11-18 13:13:45.549914] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.967 [2024-11-18 13:13:45.549932] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.967 [2024-11-18 13:13:45.565081] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.967 [2024-11-18 13:13:45.565099] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.967 [2024-11-18 13:13:45.575660] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.967 [2024-11-18 13:13:45.575679] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.967 [2024-11-18 13:13:45.591028] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.967 [2024-11-18 13:13:45.591048] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.967 [2024-11-18 13:13:45.605862] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.967 [2024-11-18 13:13:45.605881] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.967 16316.80 IOPS, 127.47 MiB/s [2024-11-18T12:13:45.669Z] [2024-11-18 13:13:45.620728] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.967 [2024-11-18 13:13:45.620747] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.967 00:30:47.967 Latency(us) 00:30:47.967 [2024-11-18T12:13:45.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.967 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:47.967 Nvme1n1 : 5.01 16321.46 127.51 0.00 0.00 7834.84 2122.80 12936.24 00:30:47.967 [2024-11-18T12:13:45.669Z] 
=================================================================================================================== 00:30:47.967 [2024-11-18T12:13:45.669Z]
Total                :   16321.46     127.51       0.00     0.00    7834.84    2122.80   12936.24 00:30:47.967
[2024-11-18 13:13:45.628852] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.967
[2024-11-18 13:13:45.628869] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.967
[... last message pair repeated at ~12 ms intervals from 13:13:45.640855 through 13:13:45.784855 ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2540911) - No such process 00:30:48.226
13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2540911
13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.226 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.226 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:48.226 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.227 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:48.227 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.227 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:48.227 delay0 00:30:48.227 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.227 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:48.227 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.227 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:48.227 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.227 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:48.486 [2024-11-18 13:13:45.928693] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery 
service or discovery service referral 00:30:55.057 [2024-11-18 13:13:52.349225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2514b40 is same with the state(6) to be set 00:30:55.057 [2024-11-18 13:13:52.349262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2514b40 is same with the state(6) to be set 00:30:55.057 Initializing NVMe Controllers 00:30:55.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:55.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:55.057 Initialization complete. Launching workers. 00:30:55.057 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 910 00:30:55.057 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1180, failed to submit 50 00:30:55.057 success 1052, unsuccessful 128, failed 0 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.057 rmmod nvme_tcp 00:30:55.057 rmmod nvme_fabrics 00:30:55.057 rmmod nvme_keyring 00:30:55.057 13:13:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2539209 ']' 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2539209 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2539209 ']' 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2539209 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2539209 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2539209' 00:30:55.057 killing process with pid 2539209 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2539209 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2539209 00:30:55.057 13:13:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.057 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.042 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.042 00:30:57.042 real 0m31.659s 00:30:57.042 user 0m41.064s 00:30:57.042 sys 0m12.301s 00:30:57.042 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:57.042 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:57.042 ************************************ 
00:30:57.042 END TEST nvmf_zcopy 00:30:57.042 ************************************ 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:57.302 ************************************ 00:30:57.302 START TEST nvmf_nmic 00:30:57.302 ************************************ 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:57.302 * Looking for test storage... 
00:30:57.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.302 --rc genhtml_branch_coverage=1 00:30:57.302 --rc genhtml_function_coverage=1 00:30:57.302 --rc genhtml_legend=1 00:30:57.302 --rc geninfo_all_blocks=1 00:30:57.302 --rc geninfo_unexecuted_blocks=1 00:30:57.302 00:30:57.302 ' 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.302 --rc genhtml_branch_coverage=1 00:30:57.302 --rc genhtml_function_coverage=1 00:30:57.302 --rc genhtml_legend=1 00:30:57.302 --rc geninfo_all_blocks=1 00:30:57.302 --rc geninfo_unexecuted_blocks=1 00:30:57.302 00:30:57.302 ' 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.302 --rc genhtml_branch_coverage=1 00:30:57.302 --rc genhtml_function_coverage=1 00:30:57.302 --rc genhtml_legend=1 00:30:57.302 --rc geninfo_all_blocks=1 00:30:57.302 --rc geninfo_unexecuted_blocks=1 00:30:57.302 00:30:57.302 ' 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.302 --rc genhtml_branch_coverage=1 00:30:57.302 --rc genhtml_function_coverage=1 00:30:57.302 --rc genhtml_legend=1 00:30:57.302 --rc geninfo_all_blocks=1 00:30:57.302 --rc geninfo_unexecuted_blocks=1 00:30:57.302 00:30:57.302 ' 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.302 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:57.303 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.563 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.563 13:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:57.563 13:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:57.563 13:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:57.563 13:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.137 13:14:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:04.137 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:04.137 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:04.137 Found net devices under 0000:86:00.0: cvl_0_0 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.137 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
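The loop above walks the detected PCI devices, matches the Intel E810 device ID (0x8086:0x159b, driver `ice`), and then resolves each device's kernel interface name from the `net/` subdirectory of its sysfs entry. A minimal standalone sketch of that lookup (the `find_nic_netdevs` helper and the sysfs-root parameter are illustrative, not part of the SPDK scripts; the parameter exists so the function can be exercised against a fake tree):

```shell
#!/usr/bin/env bash
# Sketch: find net device names for PCI NICs matching a vendor:device ID,
# mirroring the pci_net_devs lookup in the log above.
find_nic_netdevs() {
    local sysfs_root=$1 vendor=$2 device=$3 pci netdev
    for pci in "$sysfs_root"/*; do
        [[ -e $pci/vendor && -e $pci/device ]] || continue
        [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
        # The kernel exposes the interface name as a subdirectory of net/
        for netdev in "$pci/net/"*; do
            [[ -e $netdev ]] && echo "${pci##*/}: ${netdev##*/}"
        done
    done
}
```

Against a real system this would be called as `find_nic_netdevs /sys/bus/pci/devices 0x8086 0x159b`.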
"${!pci_net_devs[@]}" 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:04.138 Found net devices under 0000:86:00.1: cvl_0_1 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.138 13:14:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:31:04.138 00:31:04.138 --- 10.0.0.2 ping statistics --- 00:31:04.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.138 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:31:04.138 00:31:04.138 --- 10.0.0.1 ping statistics --- 00:31:04.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.138 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2546264 
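The `nvmf_tcp_init` steps above build a two-endpoint topology on a single host: the target NIC (`cvl_0_0`) is moved into a private network namespace and assigned 10.0.0.2, the initiator NIC (`cvl_0_1`) stays in the root namespace with 10.0.0.1, and connectivity is verified with one `ping` in each direction. A dry-run sketch of the same sequence (it only prints the commands, since executing them requires root and the physical NICs; the `run` wrapper is illustrative):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built in the log above.
# Prints each command instead of executing it; swap run()'s body for
# "$@" to actually apply the configuration (root required).
setup_nvmf_netns() {
    local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
    run() { echo "+ $*"; }
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"                  # target NIC into the namespace
    run ip addr add "$ini_ip/24" dev "$ini_if"             # initiator side, root namespace
    run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run ping -c 1 "$tgt_ip"                                # initiator -> target
    run ip netns exec "$ns" ping -c 1 "$ini_ip"            # target -> initiator
}
```

With the names from this log the call would be `setup_nvmf_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1`.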
00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2546264 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2546264 ']' 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:04.138 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.138 [2024-11-18 13:14:00.970581] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:04.138 [2024-11-18 13:14:00.971473] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
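`waitforlisten` above blocks until the freshly started `nvmf_tgt` process is up and its RPC socket (`/var/tmp/spdk.sock` by default) is available. A simplified poll loop with a retry budget, under the assumption that readiness can be detected by the socket path appearing (the real helper also probes the RPC server itself; `wait_for_path` is an illustrative name):

```shell
#!/usr/bin/env bash
# Sketch: poll for a socket/file to appear, with a timeout, in the spirit
# of waitforlisten. Returns 0 once the path exists, 1 on timeout.
wait_for_path() {
    local path=$1 retries=${2:-100} delay=${3:-0.1} i
    for ((i = 0; i < retries; i++)); do
        [[ -e $path ]] && return 0
        sleep "$delay"
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```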
00:31:04.138 [2024-11-18 13:14:00.971505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.138 [2024-11-18 13:14:01.037652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:04.138 [2024-11-18 13:14:01.082053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.138 [2024-11-18 13:14:01.082092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.138 [2024-11-18 13:14:01.082099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.138 [2024-11-18 13:14:01.082105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.138 [2024-11-18 13:14:01.082111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.138 [2024-11-18 13:14:01.085372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.138 [2024-11-18 13:14:01.085411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.138 [2024-11-18 13:14:01.085518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.138 [2024-11-18 13:14:01.085519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:04.138 [2024-11-18 13:14:01.152593] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:04.138 [2024-11-18 13:14:01.152973] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:04.138 [2024-11-18 13:14:01.153825] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:04.138 [2024-11-18 13:14:01.153948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:04.138 [2024-11-18 13:14:01.154004] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.138 [2024-11-18 13:14:01.234212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.138 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.139 Malloc0 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.139 [2024-11-18 13:14:01.318408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.139 13:14:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:04.139 test case1: single bdev can't be used in multiple subsystems 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.139 [2024-11-18 13:14:01.345923] 
bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:04.139 [2024-11-18 13:14:01.345948] subsystem.c:2300:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:04.139 [2024-11-18 13:14:01.345957] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.139 request: 00:31:04.139 { 00:31:04.139 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:04.139 "namespace": { 00:31:04.139 "bdev_name": "Malloc0", 00:31:04.139 "no_auto_visible": false 00:31:04.139 }, 00:31:04.139 "method": "nvmf_subsystem_add_ns", 00:31:04.139 "req_id": 1 00:31:04.139 } 00:31:04.139 Got JSON-RPC error response 00:31:04.139 response: 00:31:04.139 { 00:31:04.139 "code": -32602, 00:31:04.139 "message": "Invalid parameters" 00:31:04.139 } 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:04.139 Adding namespace failed - expected result. 
00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:04.139 test case2: host connect to nvmf target in multiple paths 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.139 [2024-11-18 13:14:01.358011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:31:04.139 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:31:06.674 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:06.674 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:06.674 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:06.674 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:31:06.674 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:06.674 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:31:06.674 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:06.674 [global] 00:31:06.674 thread=1 00:31:06.674 invalidate=1 00:31:06.674 rw=write 00:31:06.674 time_based=1 00:31:06.674 runtime=1 00:31:06.674 ioengine=libaio 00:31:06.674 direct=1 00:31:06.674 bs=4096 00:31:06.674 iodepth=1 00:31:06.674 norandommap=0 00:31:06.674 numjobs=1 00:31:06.674 00:31:06.674 verify_dump=1 00:31:06.674 verify_backlog=512 00:31:06.674 verify_state_save=0 00:31:06.674 do_verify=1 00:31:06.674 verify=crc32c-intel 00:31:06.674 [job0] 00:31:06.674 filename=/dev/nvme0n1 00:31:06.674 Could not set queue depth (nvme0n1) 00:31:06.674 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:06.674 fio-3.35 00:31:06.674 Starting 1 thread 00:31:07.612 00:31:07.612 job0: (groupid=0, jobs=1): err= 0: pid=2546921: Mon Nov 18 
13:14:05 2024 00:31:07.612 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:31:07.612 slat (nsec): min=9058, max=22797, avg=11323.18, stdev=4519.67 00:31:07.613 clat (usec): min=40899, max=41272, avg=41001.47, stdev=76.96 00:31:07.613 lat (usec): min=40921, max=41282, avg=41012.80, stdev=75.97 00:31:07.613 clat percentiles (usec): 00:31:07.613 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:07.613 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:07.613 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:07.613 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:07.613 | 99.99th=[41157] 00:31:07.613 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:31:07.613 slat (usec): min=9, max=26840, avg=63.23, stdev=1185.74 00:31:07.613 clat (usec): min=129, max=322, avg=146.27, stdev=27.11 00:31:07.613 lat (usec): min=140, max=27134, avg=209.50, stdev=1192.54 00:31:07.613 clat percentiles (usec): 00:31:07.613 | 1.00th=[ 133], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:31:07.613 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 139], 00:31:07.613 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 157], 95.00th=[ 241], 00:31:07.613 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 322], 99.95th=[ 322], 00:31:07.613 | 99.99th=[ 322] 00:31:07.613 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:07.613 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:07.613 lat (usec) : 250=95.51%, 500=0.37% 00:31:07.613 lat (msec) : 50=4.12% 00:31:07.613 cpu : usr=0.30%, sys=0.40%, ctx=538, majf=0, minf=1 00:31:07.613 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.613 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.613 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:07.613 00:31:07.613 Run status group 0 (all jobs): 00:31:07.613 READ: bw=87.0KiB/s (89.1kB/s), 87.0KiB/s-87.0KiB/s (89.1kB/s-89.1kB/s), io=88.0KiB (90.1kB), run=1011-1011msec 00:31:07.613 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:31:07.613 00:31:07.613 Disk stats (read/write): 00:31:07.613 nvme0n1: ios=45/512, merge=0/0, ticks=1769/70, in_queue=1839, util=98.50% 00:31:07.613 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:07.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:07.872 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:07.872 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:31:07.872 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:07.872 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:07.872 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:07.873 13:14:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.873 rmmod nvme_tcp 00:31:07.873 rmmod nvme_fabrics 00:31:07.873 rmmod nvme_keyring 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2546264 ']' 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2546264 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2546264 ']' 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2546264 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2546264 
00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2546264' 00:31:07.873 killing process with pid 2546264 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2546264 00:31:07.873 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2546264 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.132 13:14:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.132 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.671 00:31:10.671 real 0m13.010s 00:31:10.671 user 0m23.925s 00:31:10.671 sys 0m5.848s 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:10.671 ************************************ 00:31:10.671 END TEST nvmf_nmic 00:31:10.671 ************************************ 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:10.671 ************************************ 00:31:10.671 START TEST nvmf_fio_target 00:31:10.671 ************************************ 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:10.671 * Looking for test storage... 
00:31:10.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:31:10.671 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:10.671 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:10.671 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.671 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.671 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.671 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.671 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.671 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.671 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.671 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.672 
13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.672 --rc genhtml_branch_coverage=1 00:31:10.672 --rc genhtml_function_coverage=1 00:31:10.672 --rc genhtml_legend=1 00:31:10.672 --rc geninfo_all_blocks=1 00:31:10.672 --rc geninfo_unexecuted_blocks=1 00:31:10.672 00:31:10.672 ' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.672 --rc genhtml_branch_coverage=1 00:31:10.672 --rc genhtml_function_coverage=1 00:31:10.672 --rc genhtml_legend=1 00:31:10.672 --rc geninfo_all_blocks=1 00:31:10.672 --rc geninfo_unexecuted_blocks=1 00:31:10.672 00:31:10.672 ' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.672 --rc genhtml_branch_coverage=1 00:31:10.672 --rc genhtml_function_coverage=1 00:31:10.672 --rc genhtml_legend=1 00:31:10.672 --rc geninfo_all_blocks=1 00:31:10.672 --rc geninfo_unexecuted_blocks=1 00:31:10.672 00:31:10.672 ' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.672 --rc genhtml_branch_coverage=1 00:31:10.672 --rc genhtml_function_coverage=1 00:31:10.672 --rc genhtml_legend=1 00:31:10.672 --rc geninfo_all_blocks=1 
00:31:10.672 --rc geninfo_unexecuted_blocks=1 00:31:10.672 00:31:10.672 ' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:10.672 
13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.672 13:14:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.672 
13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.672 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.673 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.673 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.673 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.673 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.673 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.673 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:10.673 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.673 13:14:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.249 13:14:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:17.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:17.249 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.249 
13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:17.249 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.249 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:17.250 Found net devices under 0000:86:00.1: cvl_0_1 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:17.250 13:14:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:17.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:31:17.250 00:31:17.250 --- 10.0.0.2 ping statistics --- 00:31:17.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.250 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:17.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:31:17.250 00:31:17.250 --- 10.0.0.1 ping statistics --- 00:31:17.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.250 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:17.250 13:14:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2550624 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2550624 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2550624 ']' 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:17.250 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:17.250 [2024-11-18 13:14:14.039062] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:17.250 [2024-11-18 13:14:14.040045] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:31:17.250 [2024-11-18 13:14:14.040084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.250 [2024-11-18 13:14:14.121200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.250 [2024-11-18 13:14:14.162733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.250 [2024-11-18 13:14:14.162771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.250 [2024-11-18 13:14:14.162778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.250 [2024-11-18 13:14:14.162784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.250 [2024-11-18 13:14:14.162789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.250 [2024-11-18 13:14:14.164402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.250 [2024-11-18 13:14:14.164452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.250 [2024-11-18 13:14:14.164562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.250 [2024-11-18 13:14:14.164562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:17.250 [2024-11-18 13:14:14.232757] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:17.250 [2024-11-18 13:14:14.233431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:17.250 [2024-11-18 13:14:14.233808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:17.250 [2024-11-18 13:14:14.234176] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:17.250 [2024-11-18 13:14:14.234229] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:17.250 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:17.250 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:31:17.250 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:17.250 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:17.250 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:17.250 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.250 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:17.510 [2024-11-18 13:14:15.097409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.510 13:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:17.769 13:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:17.769 13:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:18.029 13:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:18.029 13:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:18.288 13:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:18.288 13:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:18.548 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:18.548 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:18.548 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:18.807 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:18.807 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.067 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:19.067 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.326 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:19.326 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:19.585 13:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:19.585 13:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:19.585 13:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:19.843 13:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:19.843 13:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:20.103 13:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.103 [2024-11-18 13:14:17.781312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.362 13:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:20.362 13:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:20.621 13:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:20.880 13:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:20.880 13:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:31:20.880 13:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:20.880 13:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:31:20.880 13:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:31:20.880 13:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:31:22.785 13:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:22.785 13:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:22.785 13:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:22.785 13:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:31:22.785 13:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:22.785 13:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
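For readability, the target-side configuration that the trace above performs can be summarized as the following command sequence. This is a non-runnable recap of calls already visible in the log, not a standalone script: `rpc.py` stands for the full `scripts/rpc.py` path used in the run, and the `--hostnqn`/`--hostid` values (elided here) are the ones generated earlier by `nvme gen-hostnqn`.

```
# transport and backing malloc bdevs (64 MiB, 512 B block size)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512      # repeated for Malloc0..Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# one subsystem exposing the bdevs, with a TCP listener on the target IP
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: connect, yielding /dev/nvme0n1../dev/nvme0n4
nvme connect --hostnqn=... --hostid=... -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```

The four namespaces (two plain malloc bdevs, one RAID0, one concat) are what `waitforserial SPDKISFASTANDAWESOME 4` then counts via `lsblk` before fio starts.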
common/autotest_common.sh@1210 -- # return 0 00:31:22.785 13:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:22.785 [global] 00:31:22.785 thread=1 00:31:22.785 invalidate=1 00:31:22.785 rw=write 00:31:22.785 time_based=1 00:31:22.785 runtime=1 00:31:22.785 ioengine=libaio 00:31:22.785 direct=1 00:31:22.785 bs=4096 00:31:22.785 iodepth=1 00:31:22.785 norandommap=0 00:31:22.785 numjobs=1 00:31:22.785 00:31:22.785 verify_dump=1 00:31:22.785 verify_backlog=512 00:31:22.785 verify_state_save=0 00:31:22.785 do_verify=1 00:31:22.785 verify=crc32c-intel 00:31:22.785 [job0] 00:31:22.785 filename=/dev/nvme0n1 00:31:22.785 [job1] 00:31:22.785 filename=/dev/nvme0n2 00:31:22.785 [job2] 00:31:22.785 filename=/dev/nvme0n3 00:31:23.068 [job3] 00:31:23.068 filename=/dev/nvme0n4 00:31:23.068 Could not set queue depth (nvme0n1) 00:31:23.068 Could not set queue depth (nvme0n2) 00:31:23.068 Could not set queue depth (nvme0n3) 00:31:23.068 Could not set queue depth (nvme0n4) 00:31:23.329 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:23.329 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:23.329 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:23.329 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:23.329 fio-3.35 00:31:23.329 Starting 4 threads 00:31:24.705 00:31:24.705 job0: (groupid=0, jobs=1): err= 0: pid=2551961: Mon Nov 18 13:14:22 2024 00:31:24.705 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:31:24.705 slat (nsec): min=6866, max=23558, avg=8051.46, stdev=1426.21 00:31:24.705 clat (usec): min=191, max=1911, avg=248.86, stdev=70.68 00:31:24.706 lat (usec): min=198, 
max=1919, avg=256.91, stdev=70.74 00:31:24.706 clat percentiles (usec): 00:31:24.706 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:31:24.706 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:31:24.706 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 396], 00:31:24.706 | 99.00th=[ 437], 99.50th=[ 437], 99.90th=[ 1450], 99.95th=[ 1565], 00:31:24.706 | 99.99th=[ 1909] 00:31:24.706 write: IOPS=2513, BW=9.82MiB/s (10.3MB/s)(9.83MiB/1001msec); 0 zone resets 00:31:24.706 slat (nsec): min=9888, max=39267, avg=11277.29, stdev=1736.35 00:31:24.706 clat (usec): min=128, max=1493, avg=171.78, stdev=33.56 00:31:24.706 lat (usec): min=139, max=1503, avg=183.06, stdev=33.70 00:31:24.706 clat percentiles (usec): 00:31:24.706 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:31:24.706 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:31:24.706 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 208], 00:31:24.706 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 285], 99.95th=[ 388], 00:31:24.706 | 99.99th=[ 1500] 00:31:24.706 bw ( KiB/s): min= 8870, max= 8870, per=28.66%, avg=8870.00, stdev= 0.00, samples=1 00:31:24.706 iops : min= 2217, max= 2217, avg=2217.00, stdev= 0.00, samples=1 00:31:24.706 lat (usec) : 250=87.69%, 500=12.23% 00:31:24.706 lat (msec) : 2=0.09% 00:31:24.706 cpu : usr=4.00%, sys=6.90%, ctx=4564, majf=0, minf=1 00:31:24.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.706 issued rwts: total=2048,2516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:24.706 job1: (groupid=0, jobs=1): err= 0: pid=2551962: Mon Nov 18 13:14:22 2024 00:31:24.706 read: IOPS=627, BW=2508KiB/s (2569kB/s)(2536KiB/1011msec) 
00:31:24.706 slat (nsec): min=7218, max=25898, avg=8644.81, stdev=1953.33 00:31:24.706 clat (usec): min=209, max=41113, avg=1244.15, stdev=6174.42 00:31:24.706 lat (usec): min=217, max=41122, avg=1252.79, stdev=6174.97 00:31:24.706 clat percentiles (usec): 00:31:24.706 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 255], 00:31:24.706 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:31:24.706 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 334], 95.00th=[ 424], 00:31:24.706 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:24.706 | 99.99th=[41157] 00:31:24.706 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:31:24.706 slat (nsec): min=10626, max=37616, avg=12549.66, stdev=2199.80 00:31:24.706 clat (usec): min=143, max=448, avg=193.35, stdev=31.63 00:31:24.706 lat (usec): min=155, max=460, avg=205.90, stdev=31.94 00:31:24.706 clat percentiles (usec): 00:31:24.706 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:31:24.706 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 188], 00:31:24.706 | 70.00th=[ 198], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 249], 00:31:24.706 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 351], 99.95th=[ 449], 00:31:24.706 | 99.99th=[ 449] 00:31:24.706 bw ( KiB/s): min= 4087, max= 4096, per=13.22%, avg=4091.50, stdev= 6.36, samples=2 00:31:24.706 iops : min= 1021, max= 1024, avg=1022.50, stdev= 2.12, samples=2 00:31:24.706 lat (usec) : 250=64.78%, 500=34.02%, 750=0.30% 00:31:24.706 lat (msec) : 50=0.90% 00:31:24.706 cpu : usr=1.58%, sys=2.57%, ctx=1659, majf=0, minf=1 00:31:24.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.706 issued rwts: total=634,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.706 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:31:24.706 job2: (groupid=0, jobs=1): err= 0: pid=2551963: Mon Nov 18 13:14:22 2024 00:31:24.706 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:24.706 slat (nsec): min=6762, max=30191, avg=7695.30, stdev=1018.28 00:31:24.706 clat (usec): min=212, max=40878, avg=270.92, stdev=909.19 00:31:24.706 lat (usec): min=220, max=40890, avg=278.61, stdev=909.31 00:31:24.706 clat percentiles (usec): 00:31:24.706 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:31:24.706 | 30.00th=[ 245], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:31:24.706 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 262], 00:31:24.706 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 457], 99.95th=[ 6718], 00:31:24.706 | 99.99th=[40633] 00:31:24.706 write: IOPS=2301, BW=9207KiB/s (9428kB/s)(9216KiB/1001msec); 0 zone resets 00:31:24.706 slat (nsec): min=9742, max=41677, avg=11195.40, stdev=1642.47 00:31:24.706 clat (usec): min=138, max=279, avg=171.05, stdev=13.93 00:31:24.706 lat (usec): min=149, max=316, avg=182.25, stdev=14.44 00:31:24.706 clat percentiles (usec): 00:31:24.706 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:31:24.706 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:31:24.706 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:31:24.706 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 265], 99.95th=[ 273], 00:31:24.706 | 99.99th=[ 281] 00:31:24.706 bw ( KiB/s): min=10075, max=10075, per=32.55%, avg=10075.00, stdev= 0.00, samples=1 00:31:24.706 iops : min= 2518, max= 2518, avg=2518.00, stdev= 0.00, samples=1 00:31:24.706 lat (usec) : 250=85.20%, 500=14.75% 00:31:24.706 lat (msec) : 10=0.02%, 50=0.02% 00:31:24.706 cpu : usr=1.80%, sys=4.90%, ctx=4355, majf=0, minf=1 00:31:24.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.706 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.706 issued rwts: total=2048,2304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:24.706 job3: (groupid=0, jobs=1): err= 0: pid=2551964: Mon Nov 18 13:14:22 2024 00:31:24.706 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:31:24.706 slat (nsec): min=8027, max=46745, avg=8899.20, stdev=1668.04 00:31:24.706 clat (usec): min=207, max=41160, avg=344.84, stdev=1802.31 00:31:24.706 lat (usec): min=225, max=41169, avg=353.74, stdev=1802.51 00:31:24.706 clat percentiles (usec): 00:31:24.706 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:31:24.706 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:31:24.706 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 343], 00:31:24.706 | 99.00th=[ 494], 99.50th=[ 506], 99.90th=[41157], 99.95th=[41157], 00:31:24.706 | 99.99th=[41157] 00:31:24.706 write: IOPS=1977, BW=7908KiB/s (8098kB/s)(7916KiB/1001msec); 0 zone resets 00:31:24.706 slat (nsec): min=8953, max=37658, avg=12575.15, stdev=1934.92 00:31:24.706 clat (usec): min=153, max=1388, avg=213.34, stdev=51.54 00:31:24.706 lat (usec): min=165, max=1402, avg=225.92, stdev=51.71 00:31:24.706 clat percentiles (usec): 00:31:24.706 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:31:24.706 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 204], 00:31:24.706 | 70.00th=[ 229], 80.00th=[ 253], 90.00th=[ 285], 95.00th=[ 297], 00:31:24.706 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 490], 99.95th=[ 1385], 00:31:24.706 | 99.99th=[ 1385] 00:31:24.706 bw ( KiB/s): min= 8606, max= 8606, per=27.80%, avg=8606.00, stdev= 0.00, samples=1 00:31:24.706 iops : min= 2151, max= 2151, avg=2151.00, stdev= 0.00, samples=1 00:31:24.706 lat (usec) : 250=66.00%, 500=33.63%, 750=0.26% 00:31:24.706 lat (msec) : 2=0.03%, 50=0.09% 00:31:24.706 cpu : usr=1.90%, sys=4.20%, ctx=3515, majf=0, minf=1 
00:31:24.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.706 issued rwts: total=1536,1979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:24.706 00:31:24.706 Run status group 0 (all jobs): 00:31:24.706 READ: bw=24.2MiB/s (25.4MB/s), 2508KiB/s-8192KiB/s (2569kB/s-8389kB/s), io=24.5MiB (25.7MB), run=1000-1011msec 00:31:24.706 WRITE: bw=30.2MiB/s (31.7MB/s), 4051KiB/s-9.82MiB/s (4149kB/s-10.3MB/s), io=30.6MiB (32.0MB), run=1001-1011msec 00:31:24.706 00:31:24.706 Disk stats (read/write): 00:31:24.706 nvme0n1: ios=1840/2048, merge=0/0, ticks=458/346, in_queue=804, util=86.87% 00:31:24.706 nvme0n2: ios=541/1011, merge=0/0, ticks=1616/189, in_queue=1805, util=98.17% 00:31:24.706 nvme0n3: ios=1859/2048, merge=0/0, ticks=1439/340, in_queue=1779, util=98.12% 00:31:24.706 nvme0n4: ios=1518/1536, merge=0/0, ticks=472/312, in_queue=784, util=89.71% 00:31:24.706 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:24.706 [global] 00:31:24.706 thread=1 00:31:24.706 invalidate=1 00:31:24.706 rw=randwrite 00:31:24.706 time_based=1 00:31:24.706 runtime=1 00:31:24.706 ioengine=libaio 00:31:24.706 direct=1 00:31:24.706 bs=4096 00:31:24.706 iodepth=1 00:31:24.706 norandommap=0 00:31:24.706 numjobs=1 00:31:24.706 00:31:24.707 verify_dump=1 00:31:24.707 verify_backlog=512 00:31:24.707 verify_state_save=0 00:31:24.707 do_verify=1 00:31:24.707 verify=crc32c-intel 00:31:24.707 [job0] 00:31:24.707 filename=/dev/nvme0n1 00:31:24.707 [job1] 00:31:24.707 filename=/dev/nvme0n2 00:31:24.707 [job2] 00:31:24.707 filename=/dev/nvme0n3 00:31:24.707 [job3] 00:31:24.707 
filename=/dev/nvme0n4 00:31:24.707 Could not set queue depth (nvme0n1) 00:31:24.707 Could not set queue depth (nvme0n2) 00:31:24.707 Could not set queue depth (nvme0n3) 00:31:24.707 Could not set queue depth (nvme0n4) 00:31:24.707 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.707 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.707 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.707 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:24.707 fio-3.35 00:31:24.707 Starting 4 threads 00:31:26.085 00:31:26.086 job0: (groupid=0, jobs=1): err= 0: pid=2552340: Mon Nov 18 13:14:23 2024 00:31:26.086 read: IOPS=1313, BW=5254KiB/s (5380kB/s)(5280KiB/1005msec) 00:31:26.086 slat (nsec): min=2411, max=63837, avg=6717.36, stdev=3948.99 00:31:26.086 clat (usec): min=150, max=41069, avg=544.42, stdev=3531.30 00:31:26.086 lat (usec): min=153, max=41093, avg=551.14, stdev=3532.63 00:31:26.086 clat percentiles (usec): 00:31:26.086 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 194], 00:31:26.086 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 229], 60.00th=[ 265], 00:31:26.086 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 289], 95.00th=[ 293], 00:31:26.086 | 99.00th=[ 367], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:26.086 | 99.99th=[41157] 00:31:26.086 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:31:26.086 slat (nsec): min=3582, max=72488, avg=7505.80, stdev=4674.16 00:31:26.086 clat (usec): min=110, max=333, avg=168.74, stdev=47.19 00:31:26.086 lat (usec): min=114, max=369, avg=176.25, stdev=50.59 00:31:26.086 clat percentiles (usec): 00:31:26.086 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 127], 20.00th=[ 133], 00:31:26.086 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 
149], 60.00th=[ 159], 00:31:26.086 | 70.00th=[ 174], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 260], 00:31:26.086 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 330], 99.95th=[ 334], 00:31:26.086 | 99.99th=[ 334] 00:31:26.086 bw ( KiB/s): min= 1136, max=11152, per=38.06%, avg=6144.00, stdev=7082.38, samples=2 00:31:26.086 iops : min= 284, max= 2788, avg=1536.00, stdev=1770.60, samples=2 00:31:26.086 lat (usec) : 250=75.21%, 500=24.44% 00:31:26.086 lat (msec) : 50=0.35% 00:31:26.086 cpu : usr=1.99%, sys=2.79%, ctx=2858, majf=0, minf=1 00:31:26.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.086 issued rwts: total=1320,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.086 job1: (groupid=0, jobs=1): err= 0: pid=2552342: Mon Nov 18 13:14:23 2024 00:31:26.086 read: IOPS=1032, BW=4130KiB/s (4229kB/s)(4192KiB/1015msec) 00:31:26.086 slat (nsec): min=2533, max=25783, avg=8055.75, stdev=3313.75 00:31:26.086 clat (usec): min=153, max=40983, avg=683.74, stdev=4246.84 00:31:26.086 lat (usec): min=156, max=41008, avg=691.80, stdev=4247.47 00:31:26.086 clat percentiles (usec): 00:31:26.086 | 1.00th=[ 159], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 217], 00:31:26.086 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 235], 00:31:26.086 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 255], 00:31:26.086 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:31:26.086 | 99.99th=[41157] 00:31:26.086 write: IOPS=1513, BW=6053KiB/s (6198kB/s)(6144KiB/1015msec); 0 zone resets 00:31:26.086 slat (nsec): min=10498, max=42906, avg=12507.23, stdev=2087.25 00:31:26.086 clat (usec): min=136, max=283, avg=171.39, stdev=21.17 00:31:26.086 lat (usec): min=147, max=296, avg=183.89, 
stdev=21.32 00:31:26.086 clat percentiles (usec): 00:31:26.086 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:31:26.086 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:31:26.086 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 200], 95.00th=[ 219], 00:31:26.086 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 265], 99.95th=[ 285], 00:31:26.086 | 99.99th=[ 285] 00:31:26.086 bw ( KiB/s): min= 1344, max=10944, per=38.06%, avg=6144.00, stdev=6788.23, samples=2 00:31:26.086 iops : min= 336, max= 2736, avg=1536.00, stdev=1697.06, samples=2 00:31:26.086 lat (usec) : 250=95.51%, 500=3.91%, 750=0.12% 00:31:26.086 lat (msec) : 50=0.46% 00:31:26.086 cpu : usr=1.48%, sys=4.73%, ctx=2585, majf=0, minf=1 00:31:26.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.086 issued rwts: total=1048,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.086 job2: (groupid=0, jobs=1): err= 0: pid=2552343: Mon Nov 18 13:14:23 2024 00:31:26.086 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:31:26.086 slat (nsec): min=9932, max=24837, avg=21870.64, stdev=3395.87 00:31:26.086 clat (usec): min=40839, max=41084, avg=40963.24, stdev=61.38 00:31:26.086 lat (usec): min=40861, max=41106, avg=40985.11, stdev=61.79 00:31:26.086 clat percentiles (usec): 00:31:26.086 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:26.086 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:26.086 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:26.086 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:26.086 | 99.99th=[41157] 00:31:26.086 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone 
resets 00:31:26.086 slat (nsec): min=10616, max=41599, avg=12217.46, stdev=2415.73 00:31:26.086 clat (usec): min=151, max=275, avg=179.63, stdev=14.01 00:31:26.086 lat (usec): min=163, max=317, avg=191.85, stdev=14.74 00:31:26.086 clat percentiles (usec): 00:31:26.086 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:31:26.086 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:31:26.086 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 206], 00:31:26.086 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 277], 99.95th=[ 277], 00:31:26.086 | 99.99th=[ 277] 00:31:26.086 bw ( KiB/s): min= 4096, max= 4096, per=25.38%, avg=4096.00, stdev= 0.00, samples=1 00:31:26.086 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:26.086 lat (usec) : 250=95.69%, 500=0.19% 00:31:26.086 lat (msec) : 50=4.12% 00:31:26.086 cpu : usr=0.50%, sys=0.90%, ctx=535, majf=0, minf=1 00:31:26.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.086 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.086 job3: (groupid=0, jobs=1): err= 0: pid=2552344: Mon Nov 18 13:14:23 2024 00:31:26.086 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:31:26.086 slat (nsec): min=9734, max=29668, avg=22431.45, stdev=3398.84 00:31:26.086 clat (usec): min=40543, max=41983, avg=41043.51, stdev=320.34 00:31:26.086 lat (usec): min=40553, max=42013, avg=41065.94, stdev=322.37 00:31:26.086 clat percentiles (usec): 00:31:26.086 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:26.086 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:26.086 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 
00:31:26.086 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:26.086 | 99.99th=[42206] 00:31:26.086 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:31:26.086 slat (nsec): min=9984, max=40880, avg=11214.11, stdev=1928.38 00:31:26.086 clat (usec): min=150, max=282, avg=197.29, stdev=23.63 00:31:26.086 lat (usec): min=161, max=323, avg=208.50, stdev=24.02 00:31:26.086 clat percentiles (usec): 00:31:26.086 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:31:26.086 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:31:26.086 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 239], 95.00th=[ 243], 00:31:26.086 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 281], 99.95th=[ 281], 00:31:26.086 | 99.99th=[ 281] 00:31:26.086 bw ( KiB/s): min= 4096, max= 4096, per=25.38%, avg=4096.00, stdev= 0.00, samples=1 00:31:26.086 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:26.086 lat (usec) : 250=94.76%, 500=1.12% 00:31:26.086 lat (msec) : 50=4.12% 00:31:26.086 cpu : usr=0.49%, sys=0.89%, ctx=534, majf=0, minf=2 00:31:26.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.086 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.086 00:31:26.086 Run status group 0 (all jobs): 00:31:26.086 READ: bw=9505KiB/s (9734kB/s), 87.0KiB/s-5254KiB/s (89.0kB/s-5380kB/s), io=9648KiB (9880kB), run=1002-1015msec 00:31:26.086 WRITE: bw=15.8MiB/s (16.5MB/s), 2024KiB/s-6113KiB/s (2072kB/s-6260kB/s), io=16.0MiB (16.8MB), run=1002-1015msec 00:31:26.086 00:31:26.086 Disk stats (read/write): 00:31:26.086 nvme0n1: ios=1369/1536, merge=0/0, ticks=1136/248, in_queue=1384, util=98.10% 00:31:26.086 nvme0n2: 
ios=1089/1536, merge=0/0, ticks=683/249, in_queue=932, util=98.27% 00:31:26.086 nvme0n3: ios=42/512, merge=0/0, ticks=1724/86, in_queue=1810, util=98.44% 00:31:26.086 nvme0n4: ios=23/512, merge=0/0, ticks=945/96, in_queue=1041, util=90.66% 00:31:26.086 13:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:26.086 [global] 00:31:26.086 thread=1 00:31:26.086 invalidate=1 00:31:26.086 rw=write 00:31:26.086 time_based=1 00:31:26.086 runtime=1 00:31:26.086 ioengine=libaio 00:31:26.086 direct=1 00:31:26.086 bs=4096 00:31:26.086 iodepth=128 00:31:26.086 norandommap=0 00:31:26.086 numjobs=1 00:31:26.086 00:31:26.086 verify_dump=1 00:31:26.086 verify_backlog=512 00:31:26.086 verify_state_save=0 00:31:26.086 do_verify=1 00:31:26.086 verify=crc32c-intel 00:31:26.086 [job0] 00:31:26.086 filename=/dev/nvme0n1 00:31:26.086 [job1] 00:31:26.086 filename=/dev/nvme0n2 00:31:26.086 [job2] 00:31:26.086 filename=/dev/nvme0n3 00:31:26.086 [job3] 00:31:26.086 filename=/dev/nvme0n4 00:31:26.086 Could not set queue depth (nvme0n1) 00:31:26.086 Could not set queue depth (nvme0n2) 00:31:26.086 Could not set queue depth (nvme0n3) 00:31:26.086 Could not set queue depth (nvme0n4) 00:31:26.345 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:26.345 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:26.345 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:26.345 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:26.345 fio-3.35 00:31:26.345 Starting 4 threads 00:31:27.750 00:31:27.750 job0: (groupid=0, jobs=1): err= 0: pid=2552709: Mon Nov 18 13:14:25 2024 00:31:27.750 read: IOPS=3129, BW=12.2MiB/s 
(12.8MB/s)(12.3MiB/1009msec) 00:31:27.750 slat (nsec): min=1532, max=15209k, avg=135594.86, stdev=1004477.65 00:31:27.750 clat (usec): min=6233, max=46306, avg=16921.30, stdev=8358.77 00:31:27.750 lat (usec): min=6239, max=46330, avg=17056.89, stdev=8438.11 00:31:27.750 clat percentiles (usec): 00:31:27.750 | 1.00th=[ 6259], 5.00th=[ 7767], 10.00th=[ 9372], 20.00th=[10814], 00:31:27.750 | 30.00th=[11207], 40.00th=[11731], 50.00th=[13304], 60.00th=[15926], 00:31:27.750 | 70.00th=[19530], 80.00th=[23725], 90.00th=[29230], 95.00th=[34866], 00:31:27.750 | 99.00th=[38011], 99.50th=[38011], 99.90th=[40633], 99.95th=[45876], 00:31:27.750 | 99.99th=[46400] 00:31:27.750 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:31:27.750 slat (usec): min=2, max=51331, avg=130.09, stdev=1138.17 00:31:27.750 clat (msec): min=2, max=102, avg=20.78, stdev=21.68 00:31:27.750 lat (msec): min=2, max=102, avg=20.91, stdev=21.79 00:31:27.750 clat percentiles (msec): 00:31:27.750 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:31:27.750 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:31:27.750 | 70.00th=[ 16], 80.00th=[ 33], 90.00th=[ 59], 95.00th=[ 72], 00:31:27.750 | 99.00th=[ 97], 99.50th=[ 101], 99.90th=[ 103], 99.95th=[ 103], 00:31:27.750 | 99.99th=[ 104] 00:31:27.750 bw ( KiB/s): min=11960, max=16384, per=21.27%, avg=14172.00, stdev=3128.24, samples=2 00:31:27.750 iops : min= 2990, max= 4096, avg=3543.00, stdev=782.06, samples=2 00:31:27.750 lat (msec) : 4=1.35%, 10=24.87%, 20=46.20%, 50=21.40%, 100=5.86% 00:31:27.750 lat (msec) : 250=0.31% 00:31:27.750 cpu : usr=3.27%, sys=3.57%, ctx=220, majf=0, minf=1 00:31:27.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:27.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.750 issued rwts: total=3158,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:27.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.750 job1: (groupid=0, jobs=1): err= 0: pid=2552710: Mon Nov 18 13:14:25 2024 00:31:27.750 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:31:27.750 slat (nsec): min=1037, max=52476k, avg=151982.57, stdev=1521608.81 00:31:27.750 clat (usec): min=1182, max=112488, avg=20728.88, stdev=17005.67 00:31:27.750 lat (usec): min=1190, max=112495, avg=20880.86, stdev=17138.50 00:31:27.750 clat percentiles (msec): 00:31:27.750 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 11], 00:31:27.750 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:31:27.750 | 70.00th=[ 19], 80.00th=[ 28], 90.00th=[ 44], 95.00th=[ 56], 00:31:27.750 | 99.00th=[ 112], 99.50th=[ 112], 99.90th=[ 113], 99.95th=[ 113], 00:31:27.750 | 99.99th=[ 113] 00:31:27.750 write: IOPS=3383, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1005msec); 0 zone resets 00:31:27.750 slat (usec): min=2, max=16787, avg=113.32, stdev=948.04 00:31:27.750 clat (usec): min=801, max=112527, avg=18759.74, stdev=17361.39 00:31:27.750 lat (usec): min=813, max=112541, avg=18873.06, stdev=17423.57 00:31:27.750 clat percentiles (msec): 00:31:27.750 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 9], 00:31:27.750 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 17], 00:31:27.750 | 70.00th=[ 20], 80.00th=[ 26], 90.00th=[ 38], 95.00th=[ 49], 00:31:27.750 | 99.00th=[ 107], 99.50th=[ 113], 99.90th=[ 113], 99.95th=[ 113], 00:31:27.750 | 99.99th=[ 113] 00:31:27.750 bw ( KiB/s): min= 6400, max=19776, per=19.65%, avg=13088.00, stdev=9458.26, samples=2 00:31:27.750 iops : min= 1600, max= 4944, avg=3272.00, stdev=2364.57, samples=2 00:31:27.750 lat (usec) : 1000=0.11% 00:31:27.750 lat (msec) : 2=0.53%, 4=2.72%, 10=20.26%, 20=48.96%, 50=21.69% 00:31:27.750 lat (msec) : 100=4.28%, 250=1.45% 00:31:27.750 cpu : usr=2.89%, sys=3.88%, ctx=204, majf=0, minf=1 00:31:27.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 
00:31:27.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.750 issued rwts: total=3072,3400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.750 job2: (groupid=0, jobs=1): err= 0: pid=2552711: Mon Nov 18 13:14:25 2024 00:31:27.750 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:31:27.750 slat (nsec): min=1061, max=15347k, avg=98192.04, stdev=711485.82 00:31:27.750 clat (usec): min=5070, max=34780, avg=13184.13, stdev=4962.54 00:31:27.750 lat (usec): min=5075, max=34782, avg=13282.32, stdev=5000.76 00:31:27.750 clat percentiles (usec): 00:31:27.750 | 1.00th=[ 5080], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 9503], 00:31:27.750 | 30.00th=[10683], 40.00th=[11731], 50.00th=[12256], 60.00th=[12780], 00:31:27.750 | 70.00th=[13698], 80.00th=[15795], 90.00th=[20055], 95.00th=[24773], 00:31:27.750 | 99.00th=[28181], 99.50th=[32113], 99.90th=[34866], 99.95th=[34866], 00:31:27.750 | 99.99th=[34866] 00:31:27.750 write: IOPS=5275, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1006msec); 0 zone resets 00:31:27.750 slat (nsec): min=1884, max=12294k, avg=80769.11, stdev=598067.79 00:31:27.750 clat (usec): min=1086, max=63081, avg=11332.17, stdev=5764.88 00:31:27.750 lat (usec): min=1097, max=63086, avg=11412.94, stdev=5787.70 00:31:27.750 clat percentiles (usec): 00:31:27.750 | 1.00th=[ 2376], 5.00th=[ 4817], 10.00th=[ 5473], 20.00th=[ 8160], 00:31:27.750 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[11338], 60.00th=[11863], 00:31:27.750 | 70.00th=[12387], 80.00th=[13566], 90.00th=[14746], 95.00th=[17695], 00:31:27.750 | 99.00th=[29230], 99.50th=[54264], 99.90th=[63177], 99.95th=[63177], 00:31:27.750 | 99.99th=[63177] 00:31:27.750 bw ( KiB/s): min=20480, max=21264, per=31.33%, avg=20872.00, stdev=554.37, samples=2 00:31:27.750 iops : min= 5120, max= 5316, avg=5218.00, stdev=138.59, samples=2 
00:31:27.750 lat (msec) : 2=0.36%, 4=0.74%, 10=32.33%, 20=60.12%, 50=6.05% 00:31:27.750 lat (msec) : 100=0.39% 00:31:27.750 cpu : usr=3.88%, sys=5.47%, ctx=414, majf=0, minf=2 00:31:27.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:27.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.750 issued rwts: total=5120,5307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.750 job3: (groupid=0, jobs=1): err= 0: pid=2552712: Mon Nov 18 13:14:25 2024 00:31:27.750 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:31:27.750 slat (nsec): min=1074, max=9820.7k, avg=99831.84, stdev=701360.58 00:31:27.750 clat (usec): min=5071, max=41103, avg=12983.74, stdev=4087.49 00:31:27.750 lat (usec): min=5078, max=41107, avg=13083.57, stdev=4150.76 00:31:27.750 clat percentiles (usec): 00:31:27.750 | 1.00th=[ 6587], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10290], 00:31:27.750 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:31:27.750 | 70.00th=[13829], 80.00th=[15008], 90.00th=[17695], 95.00th=[20055], 00:31:27.750 | 99.00th=[30016], 99.50th=[34341], 99.90th=[38536], 99.95th=[41157], 00:31:27.750 | 99.99th=[41157] 00:31:27.750 write: IOPS=4491, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1005msec); 0 zone resets 00:31:27.750 slat (nsec): min=1967, max=10599k, avg=125242.62, stdev=772715.81 00:31:27.750 clat (usec): min=419, max=90106, avg=16453.57, stdev=14673.19 00:31:27.750 lat (usec): min=494, max=90115, avg=16578.82, stdev=14778.59 00:31:27.750 clat percentiles (usec): 00:31:27.750 | 1.00th=[ 4228], 5.00th=[ 7439], 10.00th=[ 8979], 20.00th=[10159], 00:31:27.750 | 30.00th=[11207], 40.00th=[11863], 50.00th=[11994], 60.00th=[12649], 00:31:27.750 | 70.00th=[14091], 80.00th=[14877], 90.00th=[33424], 95.00th=[51643], 00:31:27.750 | 
99.00th=[84411], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:31:27.750 | 99.99th=[89654] 00:31:27.751 bw ( KiB/s): min=14616, max=20480, per=26.34%, avg=17548.00, stdev=4146.47, samples=2 00:31:27.751 iops : min= 3654, max= 5120, avg=4387.00, stdev=1036.62, samples=2 00:31:27.751 lat (usec) : 500=0.01% 00:31:27.751 lat (msec) : 2=0.22%, 4=0.28%, 10=17.18%, 20=73.75%, 50=5.59% 00:31:27.751 lat (msec) : 100=2.97% 00:31:27.751 cpu : usr=3.29%, sys=3.98%, ctx=314, majf=0, minf=2 00:31:27.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:27.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.751 issued rwts: total=4096,4514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.751 00:31:27.751 Run status group 0 (all jobs): 00:31:27.751 READ: bw=59.8MiB/s (62.7MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.8MB/s), io=60.3MiB (63.3MB), run=1005-1009msec 00:31:27.751 WRITE: bw=65.1MiB/s (68.2MB/s), 13.2MiB/s-20.6MiB/s (13.9MB/s-21.6MB/s), io=65.6MiB (68.8MB), run=1005-1009msec 00:31:27.751 00:31:27.751 Disk stats (read/write): 00:31:27.751 nvme0n1: ios=3094/3303, merge=0/0, ticks=35121/36758, in_queue=71879, util=96.19% 00:31:27.751 nvme0n2: ios=2921/3072, merge=0/0, ticks=32703/28443, in_queue=61146, util=96.35% 00:31:27.751 nvme0n3: ios=4400/4608, merge=0/0, ticks=26360/28387, in_queue=54747, util=87.63% 00:31:27.751 nvme0n4: ios=3188/3584, merge=0/0, ticks=25502/47077, in_queue=72579, util=88.69% 00:31:27.751 13:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:27.751 [global] 00:31:27.751 thread=1 00:31:27.751 invalidate=1 00:31:27.751 rw=randwrite 00:31:27.751 time_based=1 00:31:27.751 runtime=1 
00:31:27.751 ioengine=libaio 00:31:27.751 direct=1 00:31:27.751 bs=4096 00:31:27.751 iodepth=128 00:31:27.751 norandommap=0 00:31:27.751 numjobs=1 00:31:27.751 00:31:27.751 verify_dump=1 00:31:27.751 verify_backlog=512 00:31:27.751 verify_state_save=0 00:31:27.751 do_verify=1 00:31:27.751 verify=crc32c-intel 00:31:27.751 [job0] 00:31:27.751 filename=/dev/nvme0n1 00:31:27.751 [job1] 00:31:27.751 filename=/dev/nvme0n2 00:31:27.751 [job2] 00:31:27.751 filename=/dev/nvme0n3 00:31:27.751 [job3] 00:31:27.751 filename=/dev/nvme0n4 00:31:27.751 Could not set queue depth (nvme0n1) 00:31:27.751 Could not set queue depth (nvme0n2) 00:31:27.751 Could not set queue depth (nvme0n3) 00:31:27.751 Could not set queue depth (nvme0n4) 00:31:28.012 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:28.012 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:28.012 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:28.012 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:28.012 fio-3.35 00:31:28.012 Starting 4 threads 00:31:29.383 00:31:29.383 job0: (groupid=0, jobs=1): err= 0: pid=2553085: Mon Nov 18 13:14:26 2024 00:31:29.383 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:31:29.383 slat (nsec): min=1736, max=23189k, avg=106837.68, stdev=763455.90 00:31:29.383 clat (usec): min=6843, max=55310, avg=13759.82, stdev=7719.45 00:31:29.383 lat (usec): min=6854, max=55319, avg=13866.66, stdev=7788.08 00:31:29.383 clat percentiles (usec): 00:31:29.383 | 1.00th=[ 7701], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9765], 00:31:29.383 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11863], 60.00th=[12911], 00:31:29.383 | 70.00th=[13566], 80.00th=[14353], 90.00th=[17171], 95.00th=[32637], 00:31:29.383 | 99.00th=[46924], 
99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:31:29.383 | 99.99th=[55313] 00:31:29.383 write: IOPS=3446, BW=13.5MiB/s (14.1MB/s)(13.6MiB/1007msec); 0 zone resets 00:31:29.383 slat (usec): min=2, max=13819, avg=187.30, stdev=1007.43 00:31:29.383 clat (msec): min=5, max=113, avg=24.40, stdev=23.44 00:31:29.383 lat (msec): min=5, max=113, avg=24.59, stdev=23.58 00:31:29.383 clat percentiles (msec): 00:31:29.383 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:31:29.383 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 14], 60.00th=[ 20], 00:31:29.383 | 70.00th=[ 22], 80.00th=[ 31], 90.00th=[ 61], 95.00th=[ 85], 00:31:29.383 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 114], 99.95th=[ 114], 00:31:29.383 | 99.99th=[ 114] 00:31:29.383 bw ( KiB/s): min= 8584, max=18168, per=18.83%, avg=13376.00, stdev=6776.91, samples=2 00:31:29.383 iops : min= 2146, max= 4542, avg=3344.00, stdev=1694.23, samples=2 00:31:29.383 lat (msec) : 10=16.49%, 20=59.04%, 50=17.09%, 100=5.95%, 250=1.44% 00:31:29.383 cpu : usr=2.09%, sys=4.97%, ctx=354, majf=0, minf=1 00:31:29.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:29.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.383 issued rwts: total=3072,3471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.383 job1: (groupid=0, jobs=1): err= 0: pid=2553086: Mon Nov 18 13:14:26 2024 00:31:29.383 read: IOPS=4677, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1002msec) 00:31:29.383 slat (nsec): min=1273, max=53039k, avg=109132.89, stdev=990100.90 00:31:29.383 clat (usec): min=1027, max=98299, avg=13502.46, stdev=11619.04 00:31:29.383 lat (usec): min=4739, max=98307, avg=13611.59, stdev=11685.09 00:31:29.383 clat percentiles (usec): 00:31:29.383 | 1.00th=[ 5211], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10159], 00:31:29.383 | 
30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:31:29.383 | 70.00th=[11469], 80.00th=[12125], 90.00th=[16450], 95.00th=[22676], 00:31:29.383 | 99.00th=[81265], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:31:29.383 | 99.99th=[98042] 00:31:29.383 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:31:29.383 slat (nsec): min=1985, max=18777k, avg=90046.68, stdev=654140.73 00:31:29.383 clat (usec): min=1366, max=69014, avg=12468.75, stdev=9548.90 00:31:29.383 lat (usec): min=1387, max=69023, avg=12558.80, stdev=9592.35 00:31:29.383 clat percentiles (usec): 00:31:29.383 | 1.00th=[ 5735], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[ 9765], 00:31:29.383 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:31:29.383 | 70.00th=[10421], 80.00th=[10552], 90.00th=[11731], 95.00th=[30016], 00:31:29.383 | 99.00th=[60031], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:31:29.383 | 99.99th=[68682] 00:31:29.383 bw ( KiB/s): min=16384, max=24192, per=28.56%, avg=20288.00, stdev=5521.09, samples=2 00:31:29.383 iops : min= 4096, max= 6048, avg=5072.00, stdev=1380.27, samples=2 00:31:29.383 lat (msec) : 2=0.03%, 10=24.34%, 20=69.01%, 50=4.36%, 100=2.25% 00:31:29.383 cpu : usr=2.40%, sys=3.90%, ctx=415, majf=0, minf=1 00:31:29.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:29.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.383 issued rwts: total=4687,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.383 job2: (groupid=0, jobs=1): err= 0: pid=2553087: Mon Nov 18 13:14:26 2024 00:31:29.383 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:31:29.383 slat (nsec): min=1324, max=13320k, avg=123783.42, stdev=882603.81 00:31:29.383 clat (usec): min=4637, max=32122, 
avg=14893.58, stdev=4346.75 00:31:29.383 lat (usec): min=4647, max=32125, avg=15017.37, stdev=4406.87 00:31:29.383 clat percentiles (usec): 00:31:29.383 | 1.00th=[ 7898], 5.00th=[ 9372], 10.00th=[11600], 20.00th=[12387], 00:31:29.383 | 30.00th=[12518], 40.00th=[12649], 50.00th=[13042], 60.00th=[14222], 00:31:29.383 | 70.00th=[16450], 80.00th=[17957], 90.00th=[20317], 95.00th=[23987], 00:31:29.383 | 99.00th=[30540], 99.50th=[30540], 99.90th=[32113], 99.95th=[32113], 00:31:29.383 | 99.99th=[32113] 00:31:29.383 write: IOPS=4010, BW=15.7MiB/s (16.4MB/s)(15.7MiB/1005msec); 0 zone resets 00:31:29.383 slat (usec): min=2, max=10890, avg=125.73, stdev=658.33 00:31:29.383 clat (usec): min=3614, max=53497, avg=18395.60, stdev=10590.78 00:31:29.383 lat (usec): min=3639, max=53507, avg=18521.33, stdev=10664.08 00:31:29.383 clat percentiles (usec): 00:31:29.383 | 1.00th=[ 4752], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[10028], 00:31:29.383 | 30.00th=[11863], 40.00th=[12649], 50.00th=[14222], 60.00th=[17695], 00:31:29.383 | 70.00th=[21365], 80.00th=[25297], 90.00th=[35914], 95.00th=[42206], 00:31:29.383 | 99.00th=[48497], 99.50th=[52167], 99.90th=[53216], 99.95th=[53740], 00:31:29.383 | 99.99th=[53740] 00:31:29.383 bw ( KiB/s): min=14848, max=16384, per=21.98%, avg=15616.00, stdev=1086.12, samples=2 00:31:29.383 iops : min= 3712, max= 4096, avg=3904.00, stdev=271.53, samples=2 00:31:29.383 lat (msec) : 4=0.16%, 10=13.25%, 20=63.30%, 50=22.90%, 100=0.39% 00:31:29.383 cpu : usr=3.39%, sys=4.28%, ctx=381, majf=0, minf=1 00:31:29.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:29.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.383 issued rwts: total=3584,4031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.383 job3: (groupid=0, jobs=1): err= 0: pid=2553088: 
Mon Nov 18 13:14:26 2024 00:31:29.383 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:31:29.383 slat (nsec): min=1392, max=8149.3k, avg=92559.77, stdev=566997.62 00:31:29.383 clat (usec): min=5284, max=25816, avg=12443.24, stdev=2429.97 00:31:29.383 lat (usec): min=5289, max=25841, avg=12535.80, stdev=2458.56 00:31:29.383 clat percentiles (usec): 00:31:29.383 | 1.00th=[ 7832], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[10814], 00:31:29.383 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[12649], 00:31:29.383 | 70.00th=[13042], 80.00th=[14091], 90.00th=[15533], 95.00th=[16188], 00:31:29.383 | 99.00th=[21627], 99.50th=[21627], 99.90th=[24249], 99.95th=[24249], 00:31:29.383 | 99.99th=[25822] 00:31:29.383 write: IOPS=5241, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1004msec); 0 zone resets 00:31:29.383 slat (usec): min=2, max=16468, avg=92.76, stdev=571.80 00:31:29.383 clat (usec): min=416, max=28747, avg=11845.47, stdev=1932.05 00:31:29.383 lat (usec): min=3411, max=28760, avg=11938.23, stdev=1973.21 00:31:29.383 clat percentiles (usec): 00:31:29.383 | 1.00th=[ 6587], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11338], 00:31:29.383 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:31:29.383 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12649], 95.00th=[14746], 00:31:29.383 | 99.00th=[23200], 99.50th=[23462], 99.90th=[23462], 99.95th=[23462], 00:31:29.383 | 99.99th=[28705] 00:31:29.383 bw ( KiB/s): min=20480, max=20592, per=28.91%, avg=20536.00, stdev=79.20, samples=2 00:31:29.383 iops : min= 5120, max= 5148, avg=5134.00, stdev=19.80, samples=2 00:31:29.383 lat (usec) : 500=0.01% 00:31:29.383 lat (msec) : 4=0.40%, 10=6.22%, 20=91.91%, 50=1.45% 00:31:29.383 cpu : usr=4.09%, sys=5.48%, ctx=482, majf=0, minf=1 00:31:29.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:29.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.383 issued rwts: total=5120,5262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.383 00:31:29.383 Run status group 0 (all jobs): 00:31:29.383 READ: bw=63.9MiB/s (67.0MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=64.3MiB (67.4MB), run=1002-1007msec 00:31:29.383 WRITE: bw=69.4MiB/s (72.7MB/s), 13.5MiB/s-20.5MiB/s (14.1MB/s-21.5MB/s), io=69.9MiB (73.3MB), run=1002-1007msec 00:31:29.383 00:31:29.383 Disk stats (read/write): 00:31:29.383 nvme0n1: ios=2813/3072, merge=0/0, ticks=19749/33234, in_queue=52983, util=99.60% 00:31:29.384 nvme0n2: ios=3988/4096, merge=0/0, ticks=14420/14445, in_queue=28865, util=86.79% 00:31:29.384 nvme0n3: ios=3072/3271, merge=0/0, ticks=44427/59353, in_queue=103780, util=88.96% 00:31:29.384 nvme0n4: ios=4323/4608, merge=0/0, ticks=24888/22908, in_queue=47796, util=99.16% 00:31:29.384 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:29.384 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2553317 00:31:29.384 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:29.384 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:29.384 [global] 00:31:29.384 thread=1 00:31:29.384 invalidate=1 00:31:29.384 rw=read 00:31:29.384 time_based=1 00:31:29.384 runtime=10 00:31:29.384 ioengine=libaio 00:31:29.384 direct=1 00:31:29.384 bs=4096 00:31:29.384 iodepth=1 00:31:29.384 norandommap=1 00:31:29.384 numjobs=1 00:31:29.384 00:31:29.384 [job0] 00:31:29.384 filename=/dev/nvme0n1 00:31:29.384 [job1] 00:31:29.384 filename=/dev/nvme0n2 00:31:29.384 [job2] 00:31:29.384 filename=/dev/nvme0n3 00:31:29.384 [job3] 00:31:29.384 filename=/dev/nvme0n4 00:31:29.384 
Could not set queue depth (nvme0n1) 00:31:29.384 Could not set queue depth (nvme0n2) 00:31:29.384 Could not set queue depth (nvme0n3) 00:31:29.384 Could not set queue depth (nvme0n4) 00:31:29.384 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:29.384 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:29.384 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:29.384 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:29.384 fio-3.35 00:31:29.384 Starting 4 threads 00:31:32.659 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:32.659 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35287040, buflen=4096 00:31:32.659 fio: pid=2553461, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:32.659 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:32.659 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:32.659 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:32.659 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:31:32.659 fio: pid=2553456, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:32.659 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=57446400, buflen=4096 
00:31:32.659 fio: pid=2553453, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:32.659 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:32.659 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:32.917 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:32.917 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:32.917 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=630784, buflen=4096 00:31:32.917 fio: pid=2553454, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:33.173 00:31:33.174 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2553453: Mon Nov 18 13:14:30 2024 00:31:33.174 read: IOPS=4547, BW=17.8MiB/s (18.6MB/s)(54.8MiB/3084msec) 00:31:33.174 slat (usec): min=4, max=12568, avg=10.38, stdev=176.32 00:31:33.174 clat (usec): min=166, max=2568, avg=206.62, stdev=25.12 00:31:33.174 lat (usec): min=173, max=12953, avg=217.01, stdev=181.96 00:31:33.174 clat percentiles (usec): 00:31:33.174 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:31:33.174 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:31:33.174 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 229], 00:31:33.174 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 351], 99.95th=[ 424], 00:31:33.174 | 99.99th=[ 486] 00:31:33.174 bw ( KiB/s): min=17848, max=18648, per=66.78%, avg=18371.17, stdev=336.38, samples=6 00:31:33.174 
iops : min= 4462, max= 4662, avg=4592.67, stdev=84.23, samples=6 00:31:33.174 lat (usec) : 250=99.12%, 500=0.87% 00:31:33.174 lat (msec) : 4=0.01% 00:31:33.174 cpu : usr=1.27%, sys=4.02%, ctx=14032, majf=0, minf=2 00:31:33.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.174 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.174 issued rwts: total=14026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.174 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2553454: Mon Nov 18 13:14:30 2024 00:31:33.174 read: IOPS=46, BW=185KiB/s (190kB/s)(616KiB/3325msec) 00:31:33.174 slat (usec): min=8, max=18864, avg=297.06, stdev=2053.24 00:31:33.174 clat (usec): min=192, max=42397, avg=21145.61, stdev=20384.29 00:31:33.174 lat (usec): min=201, max=60077, avg=21444.44, stdev=20761.97 00:31:33.174 clat percentiles (usec): 00:31:33.174 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 227], 20.00th=[ 258], 00:31:33.174 | 30.00th=[ 285], 40.00th=[ 322], 50.00th=[40633], 60.00th=[40633], 00:31:33.174 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:33.174 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:33.174 | 99.99th=[42206] 00:31:33.174 bw ( KiB/s): min= 128, max= 232, per=0.69%, avg=190.00, stdev=36.81, samples=6 00:31:33.174 iops : min= 32, max= 58, avg=47.50, stdev= 9.20, samples=6 00:31:33.174 lat (usec) : 250=16.13%, 500=30.97%, 1000=0.65% 00:31:33.174 lat (msec) : 2=0.65%, 50=50.97% 00:31:33.174 cpu : usr=0.18%, sys=0.00%, ctx=159, majf=0, minf=1 00:31:33.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.174 complete : 0=0.6%, 4=99.4%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.174 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.174 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2553456: Mon Nov 18 13:14:30 2024 00:31:33.174 read: IOPS=25, BW=99.4KiB/s (102kB/s)(288KiB/2898msec) 00:31:33.174 slat (nsec): min=9743, max=33045, avg=14081.51, stdev=5003.02 00:31:33.174 clat (usec): min=296, max=42077, avg=39932.14, stdev=6736.96 00:31:33.174 lat (usec): min=321, max=42087, avg=39946.10, stdev=6734.40 00:31:33.174 clat percentiles (usec): 00:31:33.174 | 1.00th=[ 297], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:33.174 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:33.174 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:33.174 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:33.174 | 99.99th=[42206] 00:31:33.174 bw ( KiB/s): min= 96, max= 112, per=0.36%, avg=99.20, stdev= 7.16, samples=5 00:31:33.174 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:31:33.174 lat (usec) : 500=2.74% 00:31:33.174 lat (msec) : 50=95.89% 00:31:33.174 cpu : usr=0.00%, sys=0.03%, ctx=73, majf=0, minf=2 00:31:33.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.174 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.174 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.174 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2553461: Mon Nov 18 13:14:30 2024 00:31:33.174 read: IOPS=3224, BW=12.6MiB/s (13.2MB/s)(33.7MiB/2672msec) 00:31:33.174 slat (nsec): min=7025, max=44229, 
avg=8242.21, stdev=1627.07 00:31:33.174 clat (usec): min=212, max=536, avg=297.46, stdev=30.64 00:31:33.174 lat (usec): min=227, max=559, avg=305.70, stdev=30.68 00:31:33.174 clat percentiles (usec): 00:31:33.174 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 293], 00:31:33.174 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 297], 60.00th=[ 302], 00:31:33.174 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 318], 00:31:33.174 | 99.00th=[ 457], 99.50th=[ 465], 99.90th=[ 478], 99.95th=[ 478], 00:31:33.174 | 99.99th=[ 537] 00:31:33.174 bw ( KiB/s): min=12864, max=13616, per=47.36%, avg=13027.20, stdev=329.39, samples=5 00:31:33.174 iops : min= 3216, max= 3404, avg=3256.80, stdev=82.35, samples=5 00:31:33.174 lat (usec) : 250=8.23%, 500=91.72%, 750=0.03% 00:31:33.174 cpu : usr=1.46%, sys=5.58%, ctx=8616, majf=0, minf=2 00:31:33.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.174 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.174 issued rwts: total=8616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.174 00:31:33.174 Run status group 0 (all jobs): 00:31:33.174 READ: bw=26.9MiB/s (28.2MB/s), 99.4KiB/s-17.8MiB/s (102kB/s-18.6MB/s), io=89.3MiB (93.7MB), run=2672-3325msec 00:31:33.174 00:31:33.174 Disk stats (read/write): 00:31:33.174 nvme0n1: ios=14009/0, merge=0/0, ticks=2794/0, in_queue=2794, util=92.94% 00:31:33.174 nvme0n2: ios=190/0, merge=0/0, ticks=4001/0, in_queue=4001, util=98.04% 00:31:33.174 nvme0n3: ios=116/0, merge=0/0, ticks=3104/0, in_queue=3104, util=99.28% 00:31:33.174 nvme0n4: ios=8309/0, merge=0/0, ticks=2376/0, in_queue=2376, util=96.38% 00:31:33.174 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:31:33.174 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:33.431 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:33.431 13:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:33.689 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:33.689 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:33.947 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:33.947 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:33.947 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:33.947 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2553317 00:31:33.947 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:33.947 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:34.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:34.204 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:34.204 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:31:34.204 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:34.204 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:34.204 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:34.204 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:34.204 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:31:34.204 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:34.204 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:34.205 nvmf hotplug test: fio failed as expected 00:31:34.205 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:34.462 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:34.462 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:34.462 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:34.463 13:14:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:34.463 rmmod nvme_tcp 00:31:34.463 rmmod nvme_fabrics 00:31:34.463 rmmod nvme_keyring 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2550624 ']' 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2550624 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2550624 ']' 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2550624 00:31:34.463 13:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:31:34.463 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:34.463 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2550624 00:31:34.463 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:34.463 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:34.463 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2550624' 00:31:34.463 killing process with pid 2550624 00:31:34.463 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2550624 00:31:34.463 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2550624 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:34.722 13:14:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.722 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.629 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:36.629 00:31:36.629 real 0m26.415s 00:31:36.629 user 1m30.572s 00:31:36.629 sys 0m11.429s 00:31:36.629 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:36.629 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:36.629 ************************************ 00:31:36.629 END TEST nvmf_fio_target 00:31:36.629 ************************************ 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:36.888 ************************************ 00:31:36.888 START TEST nvmf_bdevio 00:31:36.888 ************************************ 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:36.888 * Looking for test storage... 00:31:36.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:36.888 13:14:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:36.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.888 --rc genhtml_branch_coverage=1 00:31:36.888 --rc genhtml_function_coverage=1 00:31:36.888 --rc genhtml_legend=1 00:31:36.888 --rc geninfo_all_blocks=1 00:31:36.888 --rc geninfo_unexecuted_blocks=1 00:31:36.888 00:31:36.888 ' 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:36.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.888 --rc genhtml_branch_coverage=1 00:31:36.888 --rc genhtml_function_coverage=1 00:31:36.888 --rc genhtml_legend=1 00:31:36.888 --rc geninfo_all_blocks=1 00:31:36.888 --rc geninfo_unexecuted_blocks=1 00:31:36.888 00:31:36.888 ' 00:31:36.888 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:36.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.889 --rc genhtml_branch_coverage=1 00:31:36.889 --rc genhtml_function_coverage=1 00:31:36.889 --rc genhtml_legend=1 00:31:36.889 --rc geninfo_all_blocks=1 00:31:36.889 --rc geninfo_unexecuted_blocks=1 00:31:36.889 00:31:36.889 ' 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:36.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.889 --rc genhtml_branch_coverage=1 00:31:36.889 --rc genhtml_function_coverage=1 00:31:36.889 --rc genhtml_legend=1 00:31:36.889 --rc 
geninfo_all_blocks=1 00:31:36.889 --rc geninfo_unexecuted_blocks=1 00:31:36.889 00:31:36.889 ' 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:36.889 13:14:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:36.889 13:14:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.889 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.148 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:37.148 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:37.149 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:37.149 13:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.722 13:14:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:43.722 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:43.722 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.722 13:14:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:43.722 Found net devices under 0000:86:00.0: cvl_0_0 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.722 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:43.723 Found net devices under 0000:86:00.1: cvl_0_1 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.723 13:14:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:43.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:31:43.723 00:31:43.723 --- 10.0.0.2 ping statistics --- 00:31:43.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.723 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:31:43.723 00:31:43.723 --- 10.0.0.1 ping statistics --- 00:31:43.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.723 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2557700 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2557700 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2557700 ']' 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:43.723 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.723 [2024-11-18 13:14:40.575016] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:43.723 [2024-11-18 13:14:40.575965] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:31:43.723 [2024-11-18 13:14:40.576000] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.723 [2024-11-18 13:14:40.658021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:43.723 [2024-11-18 13:14:40.700036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.723 [2024-11-18 13:14:40.700074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.723 [2024-11-18 13:14:40.700081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.723 [2024-11-18 13:14:40.700087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.723 [2024-11-18 13:14:40.700092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.723 [2024-11-18 13:14:40.701717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:43.723 [2024-11-18 13:14:40.701828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:43.723 [2024-11-18 13:14:40.701935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.723 [2024-11-18 13:14:40.701936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:43.723 [2024-11-18 13:14:40.768425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:43.723 [2024-11-18 13:14:40.769300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:43.723 [2024-11-18 13:14:40.769502] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:43.723 [2024-11-18 13:14:40.769894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:43.723 [2024-11-18 13:14:40.769941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.983 [2024-11-18 13:14:41.466690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.983 Malloc0 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:43.983 [2024-11-18 13:14:41.554874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.983 { 00:31:43.983 "params": { 00:31:43.983 "name": "Nvme$subsystem", 00:31:43.983 "trtype": "$TEST_TRANSPORT", 00:31:43.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.983 "adrfam": "ipv4", 00:31:43.983 "trsvcid": "$NVMF_PORT", 00:31:43.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.983 "hdgst": ${hdgst:-false}, 00:31:43.983 "ddgst": ${ddgst:-false} 00:31:43.983 }, 00:31:43.983 "method": "bdev_nvme_attach_controller" 00:31:43.983 } 00:31:43.983 EOF 00:31:43.983 )") 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:43.983 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:43.983 "params": { 00:31:43.983 "name": "Nvme1", 00:31:43.983 "trtype": "tcp", 00:31:43.983 "traddr": "10.0.0.2", 00:31:43.983 "adrfam": "ipv4", 00:31:43.983 "trsvcid": "4420", 00:31:43.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:43.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:43.983 "hdgst": false, 00:31:43.983 "ddgst": false 00:31:43.983 }, 00:31:43.983 "method": "bdev_nvme_attach_controller" 00:31:43.983 }' 00:31:43.983 [2024-11-18 13:14:41.606038] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:31:43.983 [2024-11-18 13:14:41.606089] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2557946 ] 00:31:44.241 [2024-11-18 13:14:41.683238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:44.241 [2024-11-18 13:14:41.727447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.241 [2024-11-18 13:14:41.727557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.241 [2024-11-18 13:14:41.727558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.241 I/O targets: 00:31:44.241 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:44.241 00:31:44.241 00:31:44.241 CUnit - A unit testing framework for C - Version 2.1-3 00:31:44.241 http://cunit.sourceforge.net/ 00:31:44.241 00:31:44.241 00:31:44.241 Suite: bdevio tests on: Nvme1n1 00:31:44.499 Test: blockdev write read block ...passed 00:31:44.499 Test: blockdev write zeroes read block ...passed 00:31:44.499 Test: blockdev write zeroes read no split ...passed 00:31:44.499 Test: blockdev 
write zeroes read split ...passed 00:31:44.499 Test: blockdev write zeroes read split partial ...passed 00:31:44.499 Test: blockdev reset ...[2024-11-18 13:14:42.027337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:44.499 [2024-11-18 13:14:42.027404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeec340 (9): Bad file descriptor 00:31:44.499 [2024-11-18 13:14:42.079523] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:44.499 passed 00:31:44.499 Test: blockdev write read 8 blocks ...passed 00:31:44.499 Test: blockdev write read size > 128k ...passed 00:31:44.499 Test: blockdev write read invalid size ...passed 00:31:44.499 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:44.499 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:44.499 Test: blockdev write read max offset ...passed 00:31:44.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:44.757 Test: blockdev writev readv 8 blocks ...passed 00:31:44.757 Test: blockdev writev readv 30 x 1block ...passed 00:31:44.757 Test: blockdev writev readv block ...passed 00:31:44.757 Test: blockdev writev readv size > 128k ...passed 00:31:44.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:44.757 Test: blockdev comparev and writev ...[2024-11-18 13:14:42.370219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:44.757 [2024-11-18 13:14:42.370247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.370262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:44.757 
[2024-11-18 13:14:42.370270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.370576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:44.757 [2024-11-18 13:14:42.370588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.370600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:44.757 [2024-11-18 13:14:42.370609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.370894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:44.757 [2024-11-18 13:14:42.370906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.370919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:44.757 [2024-11-18 13:14:42.370926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.371220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:44.757 [2024-11-18 13:14:42.371233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.371246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:44.757 [2024-11-18 13:14:42.371253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:44.757 passed 00:31:44.757 Test: blockdev nvme passthru rw ...passed 00:31:44.757 Test: blockdev nvme passthru vendor specific ...[2024-11-18 13:14:42.453671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:44.757 [2024-11-18 13:14:42.453695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.453811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:44.757 [2024-11-18 13:14:42.453822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.453937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:44.757 [2024-11-18 13:14:42.453947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:44.757 [2024-11-18 13:14:42.454063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:44.757 [2024-11-18 13:14:42.454073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:44.757 passed 00:31:45.016 Test: blockdev nvme admin passthru ...passed 00:31:45.016 Test: blockdev copy ...passed 00:31:45.016 00:31:45.016 Run Summary: Type Total Ran Passed Failed Inactive 00:31:45.016 suites 1 1 n/a 0 0 00:31:45.016 tests 23 23 23 0 0 00:31:45.016 asserts 152 152 152 0 n/a 00:31:45.016 00:31:45.016 Elapsed time = 1.183 
seconds 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.016 rmmod nvme_tcp 00:31:45.016 rmmod nvme_fabrics 00:31:45.016 rmmod nvme_keyring 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2557700 ']' 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2557700 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2557700 ']' 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2557700 00:31:45.016 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2557700 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2557700' 00:31:45.276 killing process with pid 2557700 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2557700 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2557700 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.276 13:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.810 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.810 00:31:47.810 real 0m10.652s 00:31:47.810 user 0m9.051s 00:31:47.810 sys 0m5.264s 00:31:47.810 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:47.810 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.810 ************************************ 00:31:47.810 END TEST nvmf_bdevio 00:31:47.810 ************************************ 00:31:47.810 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:47.810 00:31:47.810 real 4m32.756s 00:31:47.810 user 9m11.064s 00:31:47.810 sys 1m52.368s 00:31:47.810 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:31:47.810 13:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:47.810 ************************************ 00:31:47.810 END TEST nvmf_target_core_interrupt_mode 00:31:47.810 ************************************ 00:31:47.810 13:14:45 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:47.810 13:14:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:47.810 13:14:45 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:47.810 13:14:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:47.810 ************************************ 00:31:47.810 START TEST nvmf_interrupt 00:31:47.810 ************************************ 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:47.810 * Looking for test storage... 
00:31:47.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.810 --rc genhtml_branch_coverage=1 00:31:47.810 --rc genhtml_function_coverage=1 00:31:47.810 --rc genhtml_legend=1 00:31:47.810 --rc geninfo_all_blocks=1 00:31:47.810 --rc geninfo_unexecuted_blocks=1 00:31:47.810 00:31:47.810 ' 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.810 --rc genhtml_branch_coverage=1 00:31:47.810 --rc 
genhtml_function_coverage=1 00:31:47.810 --rc genhtml_legend=1 00:31:47.810 --rc geninfo_all_blocks=1 00:31:47.810 --rc geninfo_unexecuted_blocks=1 00:31:47.810 00:31:47.810 ' 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.810 --rc genhtml_branch_coverage=1 00:31:47.810 --rc genhtml_function_coverage=1 00:31:47.810 --rc genhtml_legend=1 00:31:47.810 --rc geninfo_all_blocks=1 00:31:47.810 --rc geninfo_unexecuted_blocks=1 00:31:47.810 00:31:47.810 ' 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.810 --rc genhtml_branch_coverage=1 00:31:47.810 --rc genhtml_function_coverage=1 00:31:47.810 --rc genhtml_legend=1 00:31:47.810 --rc geninfo_all_blocks=1 00:31:47.810 --rc geninfo_unexecuted_blocks=1 00:31:47.810 00:31:47.810 ' 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.810 
13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.810 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.811 
13:14:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.811 13:14:45 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.811 
13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.811 13:14:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.382 13:14:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.382 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:54.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:54.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.383 13:14:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:54.383 Found net devices under 0000:86:00.0: cvl_0_0 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:54.383 Found net devices under 0000:86:00.1: cvl_0_1 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.383 13:14:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.383 13:14:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:54.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:31:54.383 00:31:54.383 --- 10.0.0.2 ping statistics --- 00:31:54.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.383 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:54.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:31:54.383 00:31:54.383 --- 10.0.0.1 ping statistics --- 00:31:54.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.383 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:54.383 13:14:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2561654 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2561654 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 2561654 ']' 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:54.383 [2024-11-18 13:14:51.281170] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:54.383 [2024-11-18 13:14:51.282170] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:31:54.383 [2024-11-18 13:14:51.282216] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.383 [2024-11-18 13:14:51.361580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:54.383 [2024-11-18 13:14:51.403865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.383 [2024-11-18 13:14:51.403904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.383 [2024-11-18 13:14:51.403911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.383 [2024-11-18 13:14:51.403917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.383 [2024-11-18 13:14:51.403922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.383 [2024-11-18 13:14:51.405138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.383 [2024-11-18 13:14:51.405139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.383 [2024-11-18 13:14:51.473240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:54.383 [2024-11-18 13:14:51.473805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:54.383 [2024-11-18 13:14:51.473981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.383 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:54.384 5000+0 records in 00:31:54.384 5000+0 records out 00:31:54.384 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0168769 s, 607 MB/s 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:54.384 AIO0 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.384 13:14:51 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:54.384 [2024-11-18 13:14:51.597936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:54.384 [2024-11-18 13:14:51.638257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2561654 0 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2561654 0 idle 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2561654 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2561654 -w 256 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2561654 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0' 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2561654 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:54.384 
13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2561654 1 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2561654 1 idle 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2561654 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2561654 -w 256 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:54.384 13:14:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2561698 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2561698 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2561753 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2561654 0 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2561654 0 busy 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2561654 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2561654 -w 256 00:31:54.384 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2561654 root 20 0 128.2g 46848 33792 R 66.7 0.0 0:00.36 reactor_0' 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2561654 root 20 0 128.2g 46848 33792 R 66.7 0.0 0:00.36 reactor_0 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:54.642 13:14:52 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2561654 1 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2561654 1 busy 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2561654 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2561654 -w 256 00:31:54.642 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2561698 root 20 0 128.2g 46848 33792 R 87.5 0.0 0:00.23 reactor_1' 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2561698 root 20 0 128.2g 46848 33792 R 87.5 0.0 0:00.23 reactor_1 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=87 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:54.900 13:14:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2561753 00:32:04.865 Initializing NVMe Controllers 00:32:04.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:04.865 Controller IO queue size 256, less than required. 00:32:04.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:04.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:04.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:04.865 Initialization complete. Launching workers. 
00:32:04.865 ======================================================== 00:32:04.865 Latency(us) 00:32:04.865 Device Information : IOPS MiB/s Average min max 00:32:04.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16103.00 62.90 15906.06 2879.43 31742.69 00:32:04.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16227.50 63.39 15779.57 7711.04 27143.64 00:32:04.865 ======================================================== 00:32:04.865 Total : 32330.49 126.29 15842.57 2879.43 31742.69 00:32:04.865 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2561654 0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2561654 0 idle 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2561654 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2561654 -w 256 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2561654 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0' 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2561654 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2561654 1 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2561654 1 idle 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2561654 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:04.865 13:15:02 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2561654 -w 256 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2561698 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2561698 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:04.865 13:15:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:05.438 13:15:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:32:05.438 13:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:32:05.438 13:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:32:05.438 13:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:32:05.438 13:15:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2561654 0 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2561654 0 idle 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2561654 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2561654 -w 256 00:32:07.663 13:15:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2561654 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.52 reactor_0' 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2561654 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.52 reactor_0 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2561654 1 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2561654 1 idle 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2561654 00:32:07.663 
13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2561654 -w 256 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2561698 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2561698 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:07.663 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:07.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:07.923 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.183 rmmod nvme_tcp 00:32:08.183 rmmod nvme_fabrics 00:32:08.183 rmmod nvme_keyring 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.183 13:15:05 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2561654 ']' 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2561654 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 2561654 ']' 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 2561654 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2561654 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2561654' 00:32:08.183 killing process with pid 2561654 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 2561654 00:32:08.183 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 2561654 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:08.442 13:15:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.347 13:15:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:10.347 00:32:10.348 real 0m22.901s 00:32:10.348 user 0m39.762s 00:32:10.348 sys 0m8.362s 00:32:10.348 13:15:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:10.348 13:15:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:10.348 ************************************ 00:32:10.348 END TEST nvmf_interrupt 00:32:10.348 ************************************ 00:32:10.606 00:32:10.606 real 27m26.772s 00:32:10.606 user 56m31.356s 00:32:10.606 sys 9m21.691s 00:32:10.606 13:15:08 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:10.606 13:15:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:10.606 ************************************ 00:32:10.606 END TEST nvmf_tcp 00:32:10.606 ************************************ 00:32:10.606 13:15:08 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:32:10.606 13:15:08 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:10.606 13:15:08 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:10.606 13:15:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:10.606 13:15:08 -- common/autotest_common.sh@10 -- # set +x 00:32:10.606 ************************************ 
00:32:10.606 START TEST spdkcli_nvmf_tcp 00:32:10.606 ************************************ 00:32:10.606 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:10.606 * Looking for test storage... 00:32:10.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:10.606 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:10.606 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:32:10.606 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:10.606 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:10.606 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.606 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.606 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.607 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:10.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.866 --rc genhtml_branch_coverage=1 00:32:10.866 --rc genhtml_function_coverage=1 00:32:10.866 --rc genhtml_legend=1 00:32:10.866 --rc geninfo_all_blocks=1 00:32:10.866 --rc geninfo_unexecuted_blocks=1 00:32:10.866 00:32:10.866 ' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:10.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.866 --rc genhtml_branch_coverage=1 00:32:10.866 --rc genhtml_function_coverage=1 00:32:10.866 --rc genhtml_legend=1 00:32:10.866 --rc geninfo_all_blocks=1 
00:32:10.866 --rc geninfo_unexecuted_blocks=1 00:32:10.866 00:32:10.866 ' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:10.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.866 --rc genhtml_branch_coverage=1 00:32:10.866 --rc genhtml_function_coverage=1 00:32:10.866 --rc genhtml_legend=1 00:32:10.866 --rc geninfo_all_blocks=1 00:32:10.866 --rc geninfo_unexecuted_blocks=1 00:32:10.866 00:32:10.866 ' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:10.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.866 --rc genhtml_branch_coverage=1 00:32:10.866 --rc genhtml_function_coverage=1 00:32:10.866 --rc genhtml_legend=1 00:32:10.866 --rc geninfo_all_blocks=1 00:32:10.866 --rc geninfo_unexecuted_blocks=1 00:32:10.866 00:32:10.866 ' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:10.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2564963 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2564963 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 2564963 ']' 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.866 13:15:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:10.866 
13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:10.867 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.867 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:10.867 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:10.867 [2024-11-18 13:15:08.404722] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:32:10.867 [2024-11-18 13:15:08.404774] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564963 ] 00:32:10.867 [2024-11-18 13:15:08.479550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:10.867 [2024-11-18 13:15:08.521587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.867 [2024-11-18 13:15:08.521590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.126 13:15:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:11.126 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:11.126 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:11.126 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:11.126 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:11.126 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:11.126 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:11.126 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:11.126 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:11.126 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:11.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:11.126 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4 allow_any_host=True'\'' 00:32:11.126 '\''/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:11.126 ' 00:32:14.408 [2024-11-18 13:15:11.361194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.342 [2024-11-18 13:15:12.705669] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:17.868 [2024-11-18 13:15:15.189415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:19.767 [2024-11-18 13:15:17.348145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:21.667 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:21.667 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:21.667 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:21.667 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:21.667 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:21.668 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:21.668 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:21.668 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:21.668 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', 
'127.0.0.1:4260', True] 00:32:21.668 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:21.668 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:21.668 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4 
allow_any_host=True', False] 00:32:21.668 Executing command: ['/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:21.924 13:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@67 -- # timing_exit spdkcli_create_nvmf_config 00:32:21.924 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:21.924 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.924 13:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # timing_enter spdkcli_check_match 00:32:21.924 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:21.924 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.925 13:15:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # check_match 00:32:21.925 13:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:22.490 13:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:22.490 [MATCHING FAILED, COMPLETE FILE (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test) BELOW] 00:32:22.491 o- nvmf ...................................................................................................................... [...] 00:32:22.491 o- referral ....................................................................................................... [Referrals: 1] 00:32:22.491 | o- nqn.2014-08.org.nvmexpress.discovery .................................................................................. [TCP] 00:32:22.491 | o- hosts .......................................................................................................... 
[Hosts: 1] 00:32:22.491 | o- nqn.2014-08.org.spdk:cnode2 ....................................................................................... [...] 00:32:22.491 o- subsystem ..................................................................................................... [Subsystems: 4] 00:32:22.491 | o- nqn.2014-08.org.nvmexpress.discovery ......................................................... [st=Discovery, Allow any host] 00:32:22.491 | | o- hosts .......................................................................................................... [Hosts: 0] 00:32:22.491 | | o- listen_addresses ........................................................................................... [Addresses: 0] 00:32:22.491 | o- nqn.2014-08.org.spdk:cnode1 ...................................................... [sn=N37SXV509SRW, st=NVMe, Allow any host] 00:32:22.491 | | o- hosts .......................................................................................................... [Hosts: 1] 00:32:22.491 | | | o- nqn.2014-08.org.spdk:cnode2 ....................................................................................... [...] 00:32:22.491 | | o- listen_addresses ........................................................................................... [Addresses: 3] 00:32:22.491 | | | o- 127.0.0.1:4260 .................................................................................................... [TCP] 00:32:22.491 | | | o- 127.0.0.1:4261 .................................................................................................... [TCP] 00:32:22.491 | | | o- 127.0.0.1:4262 .................................................................................................... [TCP] 00:32:22.491 | | o- namespaces ................................................................................................ 
[Namespaces: 4] 00:32:22.491 | | o- Malloc3 .................................................................................................... [Malloc3, 1] 00:32:22.491 | | o- Malloc4 .................................................................................................... [Malloc4, 2] 00:32:22.491 | | o- Malloc5 .................................................................................................... [Malloc5, 3] 00:32:22.491 | | o- Malloc6 .................................................................................................... [Malloc6, 4] 00:32:22.491 | o- nqn.2014-08.org.spdk:cnode2 ...................................................... [sn=N37SXV509SRD, st=NVMe, Allow any host] 00:32:22.491 | | o- hosts .......................................................................................................... [Hosts: 0] 00:32:22.491 | | o- listen_addresses ........................................................................................... [Addresses: 1] 00:32:22.491 | | | o- 127.0.0.1:4260 .................................................................................................... [TCP] 00:32:22.491 | | o- namespaces ................................................................................................ [Namespaces: 1] 00:32:22.491 | | o- Malloc2 .................................................................................................... [Malloc2, 1] 00:32:22.491 | o- nqn.2014-08.org.spdk:cnode3 ...................................................... [sn=N37SXV509SRR, st=NVMe, Allow any host] 00:32:22.491 | o- hosts .......................................................................................................... [Hosts: 2] 00:32:22.491 | | o- nqn.2014-08.org.spdk:cnode1 ....................................................................................... [...] 
00:32:22.491 | | o- nqn.2014-08.org.spdk:cnode2 ....................................................................................... [...] 00:32:22.491 | o- listen_addresses ........................................................................................... [Addresses: 2] 00:32:22.491 | | o- 127.0.0.1:4260 .................................................................................................... [TCP] 00:32:22.491 | | o- 127.0.0.1:4261 .................................................................................................... [TCP] 00:32:22.491 | o- namespaces ................................................................................................ [Namespaces: 1] 00:32:22.491 | o- Malloc1 .................................................................................................... [Malloc1, 1] 00:32:22.491 o- transport ..................................................................................................... [Transports: 1] 00:32:22.491 o- TCP ................................................................................................................... [...] 00:32:22.491 00:32:22.491 [EOF] 00:32:22.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match:1 o- nvmf ...................................................................................................................... [...] 00:32:22.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test:1 o- nvmf ...................................................................................................................... [...] 00:32:22.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match:2 o- referral ....................................................................................................... 
[Referrals: 1] 00:32:22.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test:2 o- referral ....................................................................................................... [Referrals: 1] 00:32:22.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match:3 | o- nqn.2014-08.org.nvmexpress.discovery .................................................. [TCP, Secure channel, Allow any host] 00:32:22.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test:3 | o- nqn.2014-08.org.nvmexpress.discovery .................................................................................. [TCP] 00:32:22.491 FAIL: match: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match:3 did not match pattern 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # trap - ERR 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # print_backtrace 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1155 -- # [[ ehxBET =~ e ]] 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1157 -- # args=('--transport=tcp') 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1157 -- # local args 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1159 -- # xtrace_disable 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.491 ========== Backtrace start: ========== 00:32:22.491 00:32:22.491 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh:45 -> check_match([]) 00:32:22.491 ... 
00:32:22.491 40 waitforlisten $vhost_tgt_pid 00:32:22.491 41 } 00:32:22.491 42 00:32:22.491 43 function check_match() { 00:32:22.491 44 $rootdir/scripts/spdkcli.py ll $SPDKCLI_BRANCH > $testdir/match_files/${MATCH_FILE} 00:32:22.491 => 45 $rootdir/test/app/match/match $testdir/match_files/${MATCH_FILE}.match 00:32:22.491 46 rm -f $testdir/match_files/${MATCH_FILE} 00:32:22.491 47 } 00:32:22.491 48 00:32:22.491 49 function wait_for_all_nvme_ctrls_to_detach() { 00:32:22.491 50 while (($(rpc_cmd bdev_nvme_get_controllers | jq '.|length') != 0)); do :; done 00:32:22.491 ... 00:32:22.491 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh:70 -> main(["--transport=tcp"]) 00:32:22.491 ... 00:32:22.491 65 '/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts create nqn.2014-08.org.spdk:cnode2' 'nqn.2014-08.org.spdk:cnode2' True 00:32:22.491 66 " 00:32:22.491 67 timing_exit spdkcli_create_nvmf_config 00:32:22.491 68 00:32:22.491 69 timing_enter spdkcli_check_match 00:32:22.491 => 70 check_match 00:32:22.491 71 timing_exit spdkcli_check_match 00:32:22.491 72 00:32:22.491 73 timing_enter spdkcli_clear_nvmf_config 00:32:22.491 74 $spdkcli_job "'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1' 'Malloc3' 00:32:22.491 75 '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all' 'Malloc4' 00:32:22.491 ... 
00:32:22.491 00:32:22.491 ========== Backtrace end ========== 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1196 -- # return 0 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@1 -- # cleanup 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2564963 ']' 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2564963 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2564963 ']' 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2564963 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:22.491 13:15:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2564963 00:32:22.491 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:22.491 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:22.491 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2564963' 00:32:22.491 killing process with pid 2564963 00:32:22.491 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 2564963 00:32:22.491 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 2564963 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1127 -- # trap - ERR 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # print_backtrace 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1155 -- # [[ ehxBET =~ e ]] 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1157 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh' 'spdkcli_nvmf_tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf') 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1157 -- # local args 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1159 -- # xtrace_disable 00:32:22.750 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.751 ========== Backtrace start: ========== 00:32:22.751 00:32:22.751 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1127 -> run_test(["spdkcli_nvmf_tcp"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh"],["--transport=tcp"]) 00:32:22.751 ... 00:32:22.751 1122 timing_enter $test_name 00:32:22.751 1123 echo "************************************" 00:32:22.751 1124 echo "START TEST $test_name" 00:32:22.751 1125 echo "************************************" 00:32:22.751 1126 xtrace_restore 00:32:22.751 1127 time "$@" 00:32:22.751 1128 xtrace_disable 00:32:22.751 1129 echo "************************************" 00:32:22.751 1130 echo "END TEST $test_name" 00:32:22.751 1131 echo "************************************" 00:32:22.751 1132 timing_exit $test_name 00:32:22.751 ... 00:32:22.751 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh:282 -> main(["/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf"]) 00:32:22.751 ... 
00:32:22.751 277 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:32:22.751 278 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:32:22.751 279 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:32:22.751 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:32:22.751 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:32:22.751 => 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:32:22.751 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:32:22.751 284 fi 00:32:22.751 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:32:22.751 286 run_test "nvmf_abort_qd_sizes" $rootdir/test/nvmf/target/abort_qd_sizes.sh 00:32:22.751 287 # The keyring tests utilize NVMe/TLS 00:32:22.751 ... 00:32:22.751 00:32:22.751 ========== Backtrace end ========== 00:32:22.751 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1196 -- # return 0 00:32:22.751 00:32:22.751 real 0m12.116s 00:32:22.751 user 0m26.456s 00:32:22.751 sys 0m0.704s 00:32:22.751 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:32:22.751 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1394 -- # local autotest_es=255 00:32:22.751 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1395 -- # xtrace_disable 00:32:22.751 13:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:37.640 INFO: APP EXITING 00:32:37.640 INFO: killing all VMs 00:32:37.640 INFO: killing vhost app 00:32:37.640 INFO: EXIT DONE 00:32:40.176 Waiting for block devices as requested 00:32:40.176 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:40.435 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:40.435 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:40.435 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:40.694 
0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:40.694 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:40.694 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:40.954 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:40.954 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:40.954 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:41.213 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:41.213 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:41.213 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:41.213 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:41.472 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:41.472 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:41.472 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:44.763 Cleaning 00:32:44.763 Removing: /var/run/dpdk/spdk0/config 00:32:44.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:44.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:44.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:44.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:44.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:44.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:44.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:44.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:44.763 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:44.763 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:44.763 Removing: /var/run/dpdk/spdk1/config 00:32:44.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:44.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:44.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:44.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:44.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:44.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:44.763 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:44.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:44.763 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:44.763 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:44.763 Removing: /var/run/dpdk/spdk2/config 00:32:44.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:44.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:44.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:44.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:44.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:44.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:44.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:44.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:44.763 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:44.763 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:44.763 Removing: /var/run/dpdk/spdk3/config 00:32:44.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:44.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:44.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:44.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:44.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:44.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:44.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:44.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:44.763 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:44.763 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:44.763 Removing: /var/run/dpdk/spdk4/config 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:44.763 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:44.763 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:44.763 Removing: /dev/shm/bdev_svc_trace.1 00:32:44.763 Removing: /dev/shm/nvmf_trace.0 00:32:44.763 Removing: /dev/shm/spdk_tgt_trace.pid2127050 00:32:44.763 Removing: /var/run/dpdk/spdk0 00:32:44.763 Removing: /var/run/dpdk/spdk1 00:32:44.763 Removing: /var/run/dpdk/spdk2 00:32:44.763 Removing: /var/run/dpdk/spdk3 00:32:44.763 Removing: /var/run/dpdk/spdk4 00:32:44.763 Removing: /var/run/dpdk/spdk_pid1946712 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2124900 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2125964 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2127050 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2127685 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2128631 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2128665 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2129677 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2129849 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2130131 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2131725 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2133003 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2133304 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2133580 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2133890 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2134180 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2134435 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2134682 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2134970 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2135712 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2138708 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2138967 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2139221 00:32:44.763 Removing: 
/var/run/dpdk/spdk_pid2139234 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2139722 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2139729 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2140223 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2140226 00:32:44.763 Removing: /var/run/dpdk/spdk_pid2140500 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2140711 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2140803 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2140993 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2141564 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2141821 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2142115 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2145831 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2150195 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2160787 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2161331 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2165603 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2166069 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2170301 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2176164 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2178872 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2189079 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2198005 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2200353 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2201279 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2218158 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2222228 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2267644 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2272962 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2278942 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2285444 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2285446 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2286355 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2287271 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2288021 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2288657 00:32:44.764 Removing: /var/run/dpdk/spdk_pid2288659 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2288892 
00:32:45.023 Removing: /var/run/dpdk/spdk_pid2289099 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2289121 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2289996 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2290740 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2291654 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2292262 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2292337 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2292599 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2294082 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2295090 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2303416 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2332678 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2337169 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2338887 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2340571 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2340742 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2340976 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2340995 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2341496 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2343330 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2344105 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2344593 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2346710 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2347191 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2347904 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2352074 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2357579 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2357580 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2357581 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2361356 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2369716 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2374040 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2380245 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2381545 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2382879 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2384209 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2388840 00:32:45.023 Removing: 
/var/run/dpdk/spdk_pid2393153 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2397062 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2404641 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2404643 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2409356 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2409583 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2409753 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2410059 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2410107 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2414641 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2415167 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2419667 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2422333 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2428120 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2433463 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2442245 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2449327 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2449387 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2468060 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2468531 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2469304 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2469980 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2470945 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2471426 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2471906 00:32:45.023 Removing: /var/run/dpdk/spdk_pid2472582 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2476841 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2477074 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2483144 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2483203 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2488680 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2492839 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2502535 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2503186 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2507384 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2507684 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2511750 00:32:45.283 Removing: /var/run/dpdk/spdk_pid2518084 
00:32:45.283 Removing: /var/run/dpdk/spdk_pid2520665
00:32:45.283 Removing: /var/run/dpdk/spdk_pid2530600
00:32:45.283 Removing: /var/run/dpdk/spdk_pid2539291
00:32:45.283 Removing: /var/run/dpdk/spdk_pid2540911
00:32:45.283 Removing: /var/run/dpdk/spdk_pid2541823
00:32:45.283 Removing: /var/run/dpdk/spdk_pid2557946
00:32:45.283 Removing: /var/run/dpdk/spdk_pid2561753
00:32:45.283 Removing: /var/run/dpdk/spdk_pid2564963
00:32:45.283 Clean
00:32:47.187 13:15:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1451 -- # return 255
00:32:47.187 13:15:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1 -- # :
00:32:47.187 13:15:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1
00:32:47.187 13:15:44 -- spdk/autorun.sh@27 -- $ trap - ERR
00:32:47.187 13:15:44 -- spdk/autorun.sh@27 -- $ print_backtrace
00:32:47.187 13:15:44 -- common/autotest_common.sh@1155 -- $ [[ ehxBET =~ e ]]
00:32:47.187 13:15:44 -- common/autotest_common.sh@1157 -- $ args=('/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf')
00:32:47.187 13:15:44 -- common/autotest_common.sh@1157 -- $ local args
00:32:47.187 13:15:44 -- common/autotest_common.sh@1159 -- $ xtrace_disable
00:32:47.187 13:15:44 -- common/autotest_common.sh@10 -- $ set +x
00:32:47.187 ========== Backtrace start: ==========
00:32:47.187 
00:32:47.187 in spdk/autorun.sh:27 -> main(["/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf"])
00:32:47.187    ...
00:32:47.187    22  trap 'timing_finish || exit 1' EXIT
00:32:47.187    23  
00:32:47.187    24  # Runs agent scripts
00:32:47.187    25  $rootdir/autobuild.sh "$conf"
00:32:47.187    26  if ((SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1)); then
00:32:47.187 => 27  sudo -E $rootdir/autotest.sh "$conf"
00:32:47.187    28  fi
00:32:47.187    ...
00:32:47.187 
00:32:47.187 ========== Backtrace end ==========
00:32:47.187 13:15:44 -- common/autotest_common.sh@1196 -- $ return 0
00:32:47.187 13:15:44 -- spdk/autorun.sh@1 -- $ timing_finish
00:32:47.187 13:15:44 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:32:47.187 13:15:44 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:47.187 13:15:44 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:47.187 13:15:44 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:47.200 [Pipeline] }
00:32:47.218 [Pipeline] // stage
00:32:47.225 [Pipeline] }
00:32:47.242 [Pipeline] // timeout
00:32:47.250 [Pipeline] }
00:32:47.254 ERROR: script returned exit code 1
00:32:47.254 Setting overall build result to FAILURE
00:32:47.269 [Pipeline] // catchError
00:32:47.274 [Pipeline] }
00:32:47.289 [Pipeline] // wrap
00:32:47.295 [Pipeline] }
00:32:47.309 [Pipeline] // catchError
00:32:47.318 [Pipeline] stage
00:32:47.321 [Pipeline] { (Epilogue)
00:32:47.334 [Pipeline] catchError
00:32:47.336 [Pipeline] {
00:32:47.348 [Pipeline] echo
00:32:47.350 Cleanup processes
00:32:47.356 [Pipeline] sh
00:32:47.645 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:47.645 2110450 sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731930317
00:32:47.645 2110505 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731930317
00:32:47.645 2574931 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:47.660 [Pipeline] sh
00:32:47.947 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:47.947 ++ grep -v 'sudo pgrep'
00:32:47.947 ++ awk '{print $1}'
00:32:47.947 + sudo kill -9 2110450 2110505
00:32:47.959 [Pipeline] sh
00:32:48.244 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:56.377 [Pipeline] sh
00:32:56.662 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:56.662 Artifacts sizes are good
00:32:56.678 [Pipeline] archiveArtifacts
00:32:56.685 Archiving artifacts
00:32:56.833 [Pipeline] sh
00:32:57.121 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:32:57.137 [Pipeline] cleanWs
00:32:57.147 [WS-CLEANUP] Deleting project workspace...
00:32:57.147 [WS-CLEANUP] Deferred wipeout is used...
00:32:57.154 [WS-CLEANUP] done
00:32:57.156 [Pipeline] }
00:32:57.175 [Pipeline] // catchError
00:32:57.185 [Pipeline] echo
00:32:57.187 Tests finished with errors. Please check the logs for more info.
00:32:57.191 [Pipeline] echo
00:32:57.193 Execution node will be rebooted.
00:32:57.208 [Pipeline] build
00:32:57.211 Scheduling project: reset-job
00:32:57.225 [Pipeline] sh
00:32:57.521 + logger -p user.err -t JENKINS-CI
00:32:57.620 [Pipeline] }
00:32:57.633 [Pipeline] // stage
00:32:57.637 [Pipeline] }
00:32:57.650 [Pipeline] // node
00:32:57.655 [Pipeline] End of Pipeline
00:32:57.689 Finished: FAILURE